zzbright1998/SentenceKV
Official implementation of "SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching" (COLM 2025). A novel KV-cache compression method that organizes the cache at the sentence level, grouping entries by semantic similarity.
No commits in the last 6 months.
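The description above is the only method detail on this page. Purely as an illustration of what sentence-level semantic KV caching could look like, here is a minimal Python sketch: token KV pairs are grouped per sentence, each sentence is indexed by a pooled semantic vector, and decoding retrieves only the KV of the most similar sentences. The class name, the mean-pooling of key vectors, and the top-k cosine retrieval are all assumptions made for this sketch, not the repository's actual algorithm.

```python
# Hedged sketch of sentence-level semantic KV caching as the title describes it.
# Assumptions (not from the repo): mean-pooled key vectors as the sentence
# representation, cosine similarity, and top-k sentence retrieval at decode time.
import numpy as np

class SentenceKVCache:
    def __init__(self, top_k: int = 2):
        self.top_k = top_k        # number of sentences to retrieve per query
        self.sentence_reprs = []  # one semantic vector per stored sentence
        self.sentence_kv = []     # (keys, values) arrays per sentence

    def add_sentence(self, keys: np.ndarray, values: np.ndarray) -> None:
        """Store one sentence's token KV pairs under a pooled semantic key."""
        rep = keys.mean(axis=0)                 # assumed: mean-pooled token keys
        self.sentence_reprs.append(rep / np.linalg.norm(rep))
        self.sentence_kv.append((keys, values))

    def retrieve(self, query: np.ndarray):
        """Return concatenated KV of the top-k sentences most similar to `query`."""
        q = query / np.linalg.norm(query)
        sims = np.array([r @ q for r in self.sentence_reprs])
        top = np.argsort(sims)[-self.top_k:]    # indices of the best sentences
        keys = np.concatenate([self.sentence_kv[i][0] for i in top])
        values = np.concatenate([self.sentence_kv[i][1] for i in top])
        return keys, values

# Toy usage: three "sentences" of random per-token keys/values (head dim 8).
rng = np.random.default_rng(0)
cache = SentenceKVCache(top_k=2)
for n_tokens in (5, 7, 4):
    cache.add_sentence(rng.normal(size=(n_tokens, 8)),
                       rng.normal(size=(n_tokens, 8)))
k, v = cache.retrieve(rng.normal(size=8))
print(k.shape, v.shape)  # only the two most similar sentences' KV is attended to
```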
Stars: 11
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Sep 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zzbright1998/SentenceKV"
Open to everyone: 100 requests/day with no key. A free API key raises the limit to 1,000 requests/day.
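For scripted access, the curl call above translates directly to Python. The endpoint URL is taken from this page; the JSON response shape and the header name for keyed access are assumptions, not documented here.

```python
# Python equivalent of the curl command above (requires the `requests` package).
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/zzbright1998/SentenceKV"

resp = requests.get(URL, timeout=10)  # keyless tier: 100 requests/day per this page
resp.raise_for_status()
data = resp.json()                    # assumed: the API returns JSON
print(data)

# With a free key (assumed header name; check the API docs for the real one):
# resp = requests.get(URL, headers={"X-Api-Key": "YOUR_KEY"}, timeout=10)
```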
Higher-rated alternatives
intel/auto-round
🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality...
ModelCloud/GPTQModel
LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD...
pytorch/ao
PyTorch native quantization and sparsity for training and inference
Picovoice/picollm
On-device LLM Inference Powered by X-Bit Quantization
NVIDIA/kvpress
LLM KV cache compression made easy