aaronloh16/semantic-cache
Drop-in semantic caching for LLM API calls. Save 30%+ on costs.
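In broad strokes, a semantic cache stores an embedding of each prompt alongside the model's response, and serves the stored response when a new prompt embeds close enough to a cached one, skipping the paid LLM call. The TypeScript sketch below illustrates the idea only; SemanticCache, cosineSimilarity, cachedCompletion, and the 0.92 threshold are all illustrative names and values, not this repo's actual API.

// Illustrative semantic-cache sketch (not this repo's API): cache LLM
// responses keyed by prompt embeddings and serve a cached answer when a
// new prompt is similar enough.

type CacheEntry = { embedding: number[]; response: string };

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

class SemanticCache {
  private entries: CacheEntry[] = [];
  constructor(private threshold = 0.92) {} // similarity cutoff (assumed value)

  // Return the best cached response whose prompt embedding clears the threshold.
  lookup(embedding: number[]): string | undefined {
    let best: string | undefined;
    let bestScore = this.threshold;
    for (const entry of this.entries) {
      const score = cosineSimilarity(embedding, entry.embedding);
      if (score >= bestScore) {
        best = entry.response;
        bestScore = score;
      }
    }
    return best;
  }

  store(embedding: number[], response: string): void {
    this.entries.push({ embedding, response });
  }
}

// "Drop-in" wrapper: consult the cache before paying for a model call.
// `embed` and `complete` stand in for whatever embeddings and chat APIs
// the caller already uses.
async function cachedCompletion(
  prompt: string,
  embed: (text: string) => Promise<number[]>,
  complete: (text: string) => Promise<string>,
  cache: SemanticCache,
): Promise<string> {
  const embedding = await embed(prompt);
  const hit = cache.lookup(embedding);
  if (hit !== undefined) return hit; // cache hit: no model call, no cost
  const response = await complete(prompt);
  cache.store(embedding, response);
  return response;
}

Savings on the order of the advertised 30%+ would come from the fraction of traffic that repeats near-identical prompts; the threshold trades hit rate against the risk of serving a mismatched answer.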
Overall score: 22 / 100
Experimental · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 9 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: TypeScript
License: MIT
Category: —
Last pushed: Mar 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/aaronloh16/semantic-cache"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000 requests/day.
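For programmatic access, a minimal TypeScript sketch of the same call is below. The response fields are assumptions inferred from the scorecard on this page, not a documented schema, and the owner/repo path parameters are assumed from the example URL.

// Fetch a quality report from the endpoint shown above. The QualityReport
// fields are assumed from this page's scorecard, not a published schema.

interface QualityReport {
  score?: number;       // overall score out of 100 (assumed field name)
  maintenance?: number; // per-axis scores out of 25 (assumed field names)
  adoption?: number;
  maturity?: number;
  community?: number;
}

async function getQuality(owner: string, repo: string): Promise<QualityReport> {
  // Path structure assumed from the example URL above.
  const url = `https://pt-edge.onrender.com/api/v1/quality/embeddings/${owner}/${repo}`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`quality API request failed: ${res.status}`);
  return (await res.json()) as QualityReport;
}

// Usage: no API key needed within the 100 requests/day limit.
getQuality("aaronloh16", "semantic-cache").then((r) => console.log(r));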
Higher-rated alternatives
aiming-lab/SimpleMem (score: 81)
SimpleMem: Efficient Lifelong Memory for LLM Agents

zilliztech/GPTCache (score: 66)
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.

zilliztech/memsearch (score: 62)
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.

ascottbell/maasv (score: 55)
Memory Architecture as a Service — cognition layer for AI assistants. 3-signal retrieval,...

TeleAI-UAGI/telemem (score: 54)
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...