aneequrrehman/recall
AI memory layer that lives in your stack
Provides LLM-powered fact extraction with intelligent consolidation (ADD/UPDATE/DELETE/NONE decisions) and vector similarity search, storing all data in your existing database (SQLite, Postgres, or MySQL). A pluggable architecture lets you swap embedding providers (OpenAI, Cohere, Voyage) and extractors independently. Also includes an experimental structured memory mode that uses Zod schemas to produce SQL-queryable, analytics-ready data.
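The consolidation idea described above, comparing each newly extracted fact against stored memories and deciding whether to add it, update an existing entry, or skip it, can be sketched with cosine similarity. Everything below is illustrative: the types, function names, and thresholds are assumptions, not recall's actual API (DELETE is omitted here because contradiction detection is typically LLM-driven rather than similarity-based).

```typescript
// Illustrative sketch only; not recall's real API or thresholds.
type Decision = "ADD" | "UPDATE" | "DELETE" | "NONE";

interface StoredFact {
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Decide what to do with an incoming fact based on its nearest neighbor:
// near-duplicate -> NONE, closely related -> UPDATE, otherwise -> ADD.
// (DELETE would require contradiction detection, e.g. an LLM judgment.)
function consolidate(
  incoming: StoredFact,
  existing: StoredFact[],
  dupThreshold = 0.95,
  relThreshold = 0.8
): { decision: Decision; match?: StoredFact } {
  let best: StoredFact | undefined;
  let bestSim = -1;
  for (const fact of existing) {
    const sim = cosine(incoming.embedding, fact.embedding);
    if (sim > bestSim) {
      bestSim = sim;
      best = fact;
    }
  }
  if (best && bestSim >= dupThreshold) return { decision: "NONE", match: best };
  if (best && bestSim >= relThreshold) return { decision: "UPDATE", match: best };
  return { decision: "ADD" };
}
```

With toy 3-dimensional embeddings, an identical fact resolves to NONE, an unrelated one to ADD, and a near-neighbor to UPDATE; a real system would use high-dimensional embeddings from one of the pluggable providers and store vectors in the configured database.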
Stars: 22
Forks: —
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/aneequrrehman/recall"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
RichmondAlake/memorizz
MemoRizz: A Python library serving as a memory layer for AI applications. Leverages popular...
TeleAI-UAGI/telemem
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...