MSNP1381/cache-cool
🌟 Cache-cool: A fast, flexible LLM caching proxy that reduces latency and API costs by caching repetitive calls to LLM services. 🔄 Supports dynamic configurations, 📚 multiple backends (🟥 Redis, 🟢 MongoDB, 📁 JSON), and 🏗️ schema-specific settings.
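In outline, a caching proxy of this kind keys each request on its full contents and returns the stored completion on a repeat hit, skipping the provider call. The sketch below is purely illustrative and assumes nothing about cache-cool's actual API (all names are hypothetical); an in-memory dict stands in for the Redis, MongoDB, or JSON backends the project supports.

# Illustrative sketch only -- not cache-cool's API. A prompt-keyed cache
# that short-circuits repeat LLM calls; the dict stands in for a backend
# such as Redis, MongoDB, or a JSON file.
import hashlib
import json
from typing import Callable

class LLMCache:
    def __init__(self) -> None:
        self._store: dict[str, str] = {}  # backend stand-in

    def _key(self, model: str, prompt: str, **params) -> str:
        # Hash the full request so identical calls map to one entry.
        blob = json.dumps({"model": model, "prompt": prompt, **params},
                          sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def call(self, llm: Callable[..., str], model: str, prompt: str,
             **params) -> str:
        key = self._key(model, prompt, **params)
        if key in self._store:        # cache hit: return the stored reply
            return self._store[key]
        result = llm(model=model, prompt=prompt, **params)
        self._store[key] = result     # cache miss: call through and store
        return result

Because the project runs as a proxy, the same idea applies in front of the provider, so clients would not need code changes.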
No commits in the last 6 months.
Stars: 29
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Aug 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/MSNP1381/cache-cool"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
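The same endpoint is easy to query from Python. A minimal sketch using the requests library follows; the response schema is not documented in this listing, so it just pretty-prints whatever JSON comes back.

# Fetch the quality data for MSNP1381/cache-cool from the public API.
import json
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/MSNP1381/cache-cool"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. hitting the daily limit)
print(json.dumps(resp.json(), indent=2))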
Higher-rated alternatives
ModelEngine-Group/unified-cache-management
Persist and reuse KV Cache to speed up your LLM.
reloadware/reloadium
Hot Reloading and Profiling for Python
alibaba/tair-kvcache
Alibaba Cloud's high-performance KVCache system for LLM inference, with components for global...
October2001/Awesome-KV-Cache-Compression
📰 Must-read papers on KV Cache Compression (constantly updating 🤗).
xcena-dev/maru
High-Performance KV Cache Storage Engine on CXL Shared Memory for LLM Inference