Awesome-LLM-KV-Cache and Awesome-KV-Cache-Management

These are **competitors**: both projects curate research papers and code links on KV cache optimization in LLMs, so users would typically pick one as their primary reference.

| Metric | Awesome-LLM-KV-Cache | Awesome-KV-Cache-Management |
|---|---|---|
| Maintenance | 0/25 | 6/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 8/25 |
| Community | 13/25 | 9/25 |
| Stars | 417 | 291 |
| Forks | 26 | 9 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | | |
| License | GPL-3.0 | |
| Flags | Stale 6m · No Package · No Dependents | No License · No Package · No Dependents |

About Awesome-LLM-KV-Cache

Zefan-Cai/Awesome-LLM-KV-Cache

Awesome-LLM-KV-Cache: A curated list of 📙Awesome LLM KV Cache Papers with Codes.

Organizes research papers and implementations across nine specialized KV cache optimization categories—including compression, quantization, low-rank decomposition, and cross-layer utilization—enabling developers to track state-of-the-art inference acceleration techniques. Papers are mapped to official implementations from research teams at DeepSeek, Microsoft, and others, with links and recommendation ratings. The collection spans foundational work like StreamingLLM through recent advances in sparse attention and disaggregated serving architectures, targeting LLM inference optimization across various hardware and deployment scenarios.
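For context on what these optimization categories target, here is a minimal sketch of the KV cache itself in single-head autoregressive attention. All names and shapes are illustrative assumptions, not drawn from either repository; the point is that keys and values are stored once per decoded token and reused, which is exactly the memory that compression, quantization, and eviction techniques aim to shrink.

```python
# Minimal KV-cache sketch for single-head autoregressive attention.
# Illustrative only: Wq/Wk/Wv, head dim d, and shapes are assumptions.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d = 8  # head dimension (assumed)
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

k_cache, v_cache = [], []  # grows by one entry per decoded token

def decode_step(x):
    """Attend the new token's query over all cached keys/values."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    k_cache.append(k)          # store K/V once; never recompute past tokens
    v_cache.append(v)
    K = np.stack(k_cache)      # (t, d) — this is the memory KV-cache
    V = np.stack(v_cache)      # (t, d)    optimizations try to shrink
    attn = softmax(q @ K.T / np.sqrt(d))  # (t,)
    return attn @ V            # (d,)

for _ in range(4):             # cache grows linearly with sequence length
    out = decode_step(rng.standard_normal(d))
```

After four decode steps the cache holds four key and four value vectors; quantizing, pruning, or sharing these entries across layers is what the nine categories above address.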

About Awesome-KV-Cache-Management

TreeAI-Lab/Awesome-KV-Cache-Management

This repository serves as a comprehensive survey of KV cache management for LLMs, featuring numerous research papers along with their corresponding code links.

Scores updated daily from GitHub, PyPI, and npm data.