OpenMemory vs. remembra

OpenMemory and remembra are competing projects that take different architectural approaches to LLM memory persistence: OpenMemory focuses on a local, persistent memory store integrated across multiple AI platforms, while remembra provides a universal memory abstraction layer designed for self-hosted deployment across diverse AI applications.

                 OpenMemory          remembra
Overall score    61 (Established)    53 (Established)
Maintenance      17/25               13/25
Adoption         10/25               12/25
Maturity         13/25               18/25
Community        21/25               10/25
Stars            3,604               6
Forks            412                 1
Downloads                            2,027
Commits (30d)    19                  0
Language         TypeScript          HTML
License          Apache-2.0          MIT

No package, no dependents. No risk flags.

About OpenMemory

CaviraOSS/OpenMemory

Local persistent memory store for LLM applications, including Claude Desktop, GitHub Copilot, Codex, Antigravity, etc.

Provides multi-sector memory (episodic, semantic, procedural) with temporal reasoning and composite scoring—not just vector retrieval—via self-hosted SQLite/Postgres backends. Offers both embedded SDKs (Python/Node) and a centralized server exposing HTTP API, MCP protocol, and dashboard, with source connectors for GitHub, Notion, Google Drive, and web crawling to populate long-term agent context.
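To make the "composite scoring, not just vector retrieval" idea concrete, here is a minimal, self-contained sketch of how such a scorer could blend semantic similarity, temporal recency, and a per-sector weight. This is illustrative only, not OpenMemory's actual API: the function names, the sector weights, and the 0.7/0.2/0.1 blend are all assumptions.

```python
import math
import time

# Assumed per-sector weights -- illustrative, not OpenMemory's real values.
SECTOR_WEIGHTS = {"episodic": 1.0, "semantic": 0.9, "procedural": 0.8}

def cosine(a, b):
    """Plain cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def composite_score(query_vec, memory, now, half_life_days=30.0):
    """Blend similarity, exponential recency decay, and sector weight."""
    sim = cosine(query_vec, memory["embedding"])
    age_days = (now - memory["created_at"]) / 86400
    recency = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    sector = SECTOR_WEIGHTS.get(memory["sector"], 0.5)
    return 0.7 * sim + 0.2 * recency + 0.1 * sector  # weights are assumptions

now = time.time()
memories = [
    {"embedding": [1.0, 0.0], "created_at": now - 86400, "sector": "episodic"},
    {"embedding": [0.0, 1.0], "created_at": now - 90 * 86400, "sector": "semantic"},
]
# A fresh, on-topic episodic memory outranks a stale, off-topic semantic one.
ranked = sorted(memories, key=lambda m: composite_score([1.0, 0.0], m, now), reverse=True)
```

The point of the blend is that a highly similar but months-old memory can lose to a slightly less similar but recent one, which pure vector retrieval cannot express.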

About remembra

remembra-ai/remembra

Universal memory layer for AI applications. Self-host in minutes. Open source.

Provides persistent memory with entity resolution, temporal decay patterns, and graph-aware recall—automatically extracting and linking facts across sessions. Implements hybrid BM25+vector search, PII detection, and conflict resolution, integrating via Model Context Protocol (MCP) with Claude Desktop, Cursor, Windsurf, and other AI agents. Deploys locally with embedded Qdrant and Ollama, offering Python/TypeScript SDKs plus a multi-tenant dashboard with role-based access and audit logging.
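The hybrid BM25+vector search mentioned above can be sketched as follows. This is not remembra's code; it is a generic, self-contained illustration of the technique, with Okapi BM25 over tokenized documents fused with cosine similarity via min-max normalization and a weighted sum. The function names, the `alpha` weight, and the sample data are all assumptions.

```python
import math

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Okapi BM25 over pre-tokenized documents (lists of terms)."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    scores = [0.0] * N
    for term in query_terms:
        df = sum(1 for d in docs if term in d)
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        for i, d in enumerate(docs):
            tf = d.count(term)
            scores[i] += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(d) / avgdl))
    return scores

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query_terms, query_vec, docs, doc_vecs, alpha=0.5):
    """Min-max normalize each signal, then rank by their weighted sum."""
    bm25 = bm25_scores(query_terms, docs)
    vec = [cosine(query_vec, v) for v in doc_vecs]
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    fused = [alpha * s + (1 - alpha) * v for s, v in zip(norm(bm25), norm(vec))]
    return sorted(range(len(docs)), key=lambda i: fused[i], reverse=True)

docs = [["persistent", "memory", "layer"], ["weather", "forecast"], ["memory", "graph", "recall"]]
doc_vecs = [[0.9, 0.1], [0.0, 1.0], [0.8, 0.3]]
order = hybrid_rank(["memory", "recall"], [1.0, 0.0], docs, doc_vecs)
```

Fusing the two signals lets exact keyword matches ("recall") and semantic closeness each contribute, so a document strong on both ranks first even if it leads on neither signal alone.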

Scores are updated daily from GitHub, PyPI, and npm data.