remembra-ai/remembra
Universal memory layer for AI applications. Self-host in minutes. Open source.
Provides persistent memory with entity resolution, temporal decay patterns, and graph-aware recall, automatically extracting and linking facts across sessions. Implements hybrid BM25+vector search, PII detection, and conflict resolution, and integrates via the Model Context Protocol (MCP) with Claude Desktop, Cursor, Windsurf, and other AI agents. Deploys locally with embedded Qdrant and Ollama, offering Python/TypeScript SDKs plus a multi-tenant dashboard with role-based access and audit logging.
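The "hybrid BM25+vector search" mentioned above can be sketched in a few lines: score documents lexically with BM25, score them semantically with cosine similarity over embeddings, then blend the two. This is an illustrative sketch, not remembra's actual implementation; the blend weight `alpha`, the whitespace tokenizer, and the toy two-dimensional "embeddings" are all assumptions.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each doc against the query with classic BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    df = Counter()  # document frequency per term
    for toks in tokenized:
        for term in set(toks):
            df[term] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl)
            )
        scores.append(s)
    return scores

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, docs, query_vec, doc_vecs, alpha=0.5):
    """Blend max-normalized BM25 with cosine similarity; higher is better."""
    lexical = bm25_scores(query, docs)
    top = max(lexical) or 1.0
    combined = [
        alpha * (lex / top) + (1 - alpha) * cosine(query_vec, dv)
        for lex, dv in zip(lexical, doc_vecs)
    ]
    return sorted(range(len(docs)), key=lambda i: combined[i], reverse=True)

docs = ["alice prefers dark mode", "bob lives in berlin", "alice works at acme"]
vecs = [[0.6, 0.6], [0.0, 1.0], [0.95, 0.1]]  # toy embeddings
order = hybrid_rank("where does alice work", docs, [0.95, 0.1], vecs)
print(docs[order[0]])  # → alice works at acme
```

Pure BM25 ties the two "alice" memories here; the embedding term breaks the tie toward the semantically closer fact, which is the point of the hybrid approach.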
6 stars and 2,027 monthly downloads. Available on PyPI.
Stars: 6
Forks: 1
Language: HTML
License: MIT
Category:
Last pushed: Mar 12, 2026
Monthly downloads: 2,027
Commits (30d): 0
Dependencies: 3
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/vector-db/remembra-ai/remembra"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
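The same endpoint can be called from Python with only the standard library. This is a minimal sketch: the URL pattern comes from the curl example above, but the `X-API-Key` header name used for keyed requests is a hypothetical assumption, not documented here.

```python
import json
from urllib.parse import quote
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL shown in the curl example."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch the JSON payload for a repo; pass api_key for the higher limit."""
    req = Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # hypothetical header name
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(quality_url("vector-db", "remembra-ai", "remembra"))
# → https://pt-edge.onrender.com/api/v1/quality/vector-db/remembra-ai/remembra
```

Without a key this stays within the 100 requests/day anonymous tier.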
Related tools
topoteretes/cognee
Knowledge Engine for AI Agent Memory in 6 lines of code
CaviraOSS/OpenMemory
Local persistent memory store for LLM applications including Claude Desktop, GitHub Copilot,...
divagr18/memlayer
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory...
CortexReach/memory-lancedb-pro
Enhanced LanceDB memory plugin for OpenClaw — Hybrid Retrieval (Vector + BM25), Cross-Encoder...
verygoodplugins/automem
AutoMem is a graph-vector memory service that gives AI assistants durable, relational memory: