OpenMemory and Remembra
These are competitors offering different architectural approaches to LLM memory persistence—OpenMemory focuses on local storage integration across multiple AI platforms, while Remembra provides a universal memory abstraction layer designed for self-hosted deployment across diverse AI applications.
About OpenMemory
CaviraOSS/OpenMemory
Local persistent memory store for LLM applications including Claude Desktop, GitHub Copilot, Codex, Antigravity, and others.
Provides multi-sector memory (episodic, semantic, procedural) with temporal reasoning and composite scoring—not just vector retrieval—via self-hosted SQLite/Postgres backends. Offers both embedded SDKs (Python/Node) and a centralized server exposing HTTP API, MCP protocol, and dashboard, with source connectors for GitHub, Notion, Google Drive, and web crawling to populate long-term agent context.
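Composite scoring of the kind described above typically blends vector similarity with temporal signals such as recency and access frequency. The sketch below is a generic illustration of that idea, not OpenMemory's actual formula; the weights, half-life, and function names are all assumptions.

```python
import math
import time

def composite_score(similarity, last_access_ts, access_count,
                    now=None, half_life_days=30.0,
                    w_sim=0.6, w_recency=0.3, w_freq=0.1):
    """Blend semantic similarity with temporal recency and usage frequency.

    Illustrative defaults only -- OpenMemory's real parameters and
    scoring function may differ.
    """
    now = now if now is not None else time.time()
    age_days = max(0.0, (now - last_access_ts) / 86400.0)
    recency = 0.5 ** (age_days / half_life_days)   # exponential half-life decay
    frequency = 1.0 - 1.0 / (1.0 + access_count)   # saturating bonus for reuse
    return w_sim * similarity + w_recency * recency + w_freq * frequency

# A memory touched just now and used often outranks an identical but
# stale, never-reused memory with the same embedding similarity.
now = time.time()
fresh = composite_score(0.8, now, access_count=10, now=now)
stale = composite_score(0.8, now - 90 * 86400, access_count=0, now=now)
```

The appeal over pure vector retrieval is that an old, rarely used fact can be outranked by a newer one even when their embeddings are equally similar to the query.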
About remembra
remembra-ai/remembra
Universal memory layer for AI applications. Self-host in minutes. Open source.
Provides persistent memory with entity resolution, temporal decay patterns, and graph-aware recall—automatically extracting and linking facts across sessions. Implements hybrid BM25+vector search, PII detection, and conflict resolution, integrating via Model Context Protocol (MCP) with Claude Desktop, Cursor, Windsurf, and other AI agents. Deploys locally with embedded Qdrant and Ollama, offering Python/TypeScript SDKs plus a multi-tenant dashboard with role-based access and audit logging.
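Hybrid BM25+vector search needs a way to merge the two ranked result lists. Reciprocal rank fusion (RRF) is one common technique for this; whether Remembra uses RRF specifically is an assumption, and the function below is a generic sketch rather than its API.

```python
def reciprocal_rank_fusion(bm25_ranked, vector_ranked, k=60):
    """Merge two ranked lists of doc IDs via reciprocal rank fusion.

    Each document scores 1 / (k + rank + 1) per list it appears in;
    k=60 is the conventional default. This is an illustrative fusion
    step, not Remembra's actual implementation.
    """
    scores = {}
    for ranking in (bm25_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranking):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# "b" appears high in both lists, so it wins the fused ranking.
fused = reciprocal_rank_fusion(["a", "b", "c"], ["b", "c", "d"])
```

Fusing at the rank level sidesteps the problem that BM25 scores and cosine similarities live on incompatible scales.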