NVIDIA/kvpress
LLM KV cache compression made easy
954 stars. Actively maintained with 3 commits in the last 30 days.
Stars: 954
Forks: 121
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 3
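For context on what the repository does: kvpress compresses the transformers KV cache by applying a "press" during generation. A minimal sketch following the pipeline-style usage the kvpress README documents; the model name, prompt text, and compression ratio here are illustrative, not prescribed by this page:

from transformers import pipeline
from kvpress import ExpectedAttentionPress  # importing kvpress registers the custom pipeline task

# Model choice is an assumption; any transformers causal LM supported by kvpress works.
pipe = pipeline(
    "kv-press-text-generation",
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    device="cuda",
)

context = "A long document whose KV cache we want to compress."
question = "What is this document about?"

# Drop roughly half of the KV cache entries during prefill, scored by expected attention.
press = ExpectedAttentionPress(compression_ratio=0.5)
answer = pipe(context, question=question, press=press)["answer"]
print(answer)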
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVIDIA/kvpress"
Open to everyone: 100 requests/day with no key needed. A free API key raises the limit to 1,000 requests/day.
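The same endpoint can be called programmatically. A minimal Python sketch using the requests library; it assumes the endpoint returns JSON, and since the response schema and the mechanism for supplying an API key are not specified on this page, it simply prints the payload:

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/NVIDIA/kvpress"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()               # fail loudly on 4xx/5xx
data = resp.json()                    # assumption: the endpoint returns JSON
print(data)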
Related models
intel/auto-round
🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality...
ModelCloud/GPTQModel
LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD...
pytorch/ao
PyTorch native quantization and sparsity for training and inference
BlinkDL/RWKV-LM
RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly...
Picovoice/picollm
On-device LLM Inference Powered by X-Bit Quantization