AmmarHassona/llm_cache_proxy
A Rust caching proxy for LLM APIs. Delivers 500x faster responses (4ms vs 2s) and 90% cost reduction via Redis + Qdrant caching. Drop-in compatible with OpenAI SDKs and Groq.
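The "Redis + Qdrant" pairing suggests a two-tier cache: exact-match lookups in Redis backed by semantic (embedding) lookups in Qdrant. As a minimal sketch only (the proxy's actual key scheme is not documented here, and all field names below are illustrative), the exact-match tier might derive a deterministic key by hashing the request fields that affect the completion:

```python
import hashlib
import json

def cache_key(model: str, messages: list, temperature: float = 0.0) -> str:
    """Derive a deterministic cache key from the request fields that
    influence the LLM's output. Field choice here is an assumption."""
    payload = json.dumps(
        {"model": model, "messages": messages, "temperature": temperature},
        sort_keys=True,            # stable ordering -> identical requests hash alike
        separators=(",", ":"),     # compact form avoids whitespace variance
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Identical requests map to the same key, so a Redis GET could short-circuit
# the upstream call; a paraphrased request would miss this tier and fall
# through to a semantic (vector) lookup instead.
k1 = cache_key("gpt-4o-mini", [{"role": "user", "content": "Hi"}])
k2 = cache_key("gpt-4o-mini", [{"role": "user", "content": "Hi"}])
assert k1 == k2
```

Serializing with `sort_keys` and compact separators matters: without a canonical form, two byte-different but semantically identical JSON bodies would hash to different keys and defeat the exact-match tier.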
Stars: —
Forks: —
Language: Rust
License: MIT
Category: —
Last pushed: Feb 25, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/AmmarHassona/llm_cache_proxy"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
RediSearch/RediSearch
A query and indexing engine for Redis, providing secondary indexing, full-text search, vector...
redis/redis-vl-python
Redis Vector Library (RedisVL) -- the AI-native Python client for Redis.
redis-developer/redis-ai-resources
✨ A curated list of awesome community resources, integrations, and examples of Redis in the AI ecosystem.
luyug/GradCache
Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint
redis-developer/redis-product-search
Visual and semantic vector similarity with Redis Stack, FastAPI, PyTorch and Huggingface.