memora and mind-mem
These two tools are direct competitors: both provide MCP-based persistent memory for AI agents. They differ in emphasis: "star-ga/mind-mem" offers contradiction-safe memory with hybrid BM25 + vector retrieval, a co-retrieval graph, and MIND-accelerated scoring, while "agentic-box/memora" focuses on semantic storage and knowledge graphs.
About memora
agentic-box/memora
Give your AI agents persistent memory — MCP server for semantic storage, knowledge graphs, and cross-session context
Implements a Model Context Protocol (MCP) server with pluggable embedding backends (OpenAI, sentence-transformers, TF-IDF) and multi-tiered storage—local SQLite, Cloudflare D1, or S3/R2 with optional encryption and compression. Features include interactive knowledge graph visualization, RAG-powered chat with streaming LLM tool calling, event notifications for inter-agent communication, and automated memory deduplication via LLM comparison. Integrates with Claude Code and Codex CLI through stdio or HTTP transports.
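The pluggable-backend design described above can be sketched as follows. This is a minimal illustration, not memora's actual API: the class and function names (`TfidfBackend`, `pick_backend`) are hypothetical, and the TF-IDF embedder is a toy stand-in for the offline backend that needs no API key or model download.

```python
import math
from collections import Counter

class TfidfBackend:
    """Toy TF-IDF embedder: a stand-in for memora's offline backend.
    Names here are illustrative, not memora's real interface."""
    def __init__(self, corpus: list[str]):
        docs = [doc.lower().split() for doc in corpus]
        self.vocab = sorted({w for d in docs for w in d})
        n = len(docs)
        # idf(w) = log(N / document-frequency of w)
        self.idf = {w: math.log(n / sum(w in d for d in docs)) for w in self.vocab}

    def embed(self, text: str) -> list[float]:
        tf = Counter(text.lower().split())
        total = sum(tf.values()) or 1
        return [tf[w] / total * self.idf[w] for w in self.vocab]

def pick_backend(name: str, corpus: list[str]):
    """Illustrative registry mirroring the OpenAI / sentence-transformers /
    TF-IDF choice described above; only the offline path is sketched."""
    if name == "tfidf":
        return TfidfBackend(corpus)
    raise ValueError(f"backend {name!r} requires credentials or model downloads")

backend = pick_backend("tfidf", ["store user preferences", "recall project context"])
vec = backend.embed("recall preferences")
```

The point of the pattern is that callers depend only on an `embed` method, so swapping a hosted embedding API for a local model (or this TF-IDF fallback) requires no change to the storage or retrieval layers.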
About mind-mem
star-ga/mind-mem
Persistent, auditable, contradiction-safe memory for coding agents. Hybrid BM25 + vector retrieval, 19 MCP tools, co-retrieval graph, MIND-accelerated scoring. Zero external dependencies.
Implements shared memory across all MCP-compatible AI agents (Claude Code, Cursor, Windsurf, etc.) via a single SQLite workspace with concurrent-safe WAL mode. Core architecture combines BM25F full-text + vector retrieval with RRF fusion and intent-aware routing, plus a co-retrieval graph using PageRank-style propagation to surface structurally-related blocks. Includes active contradiction detection, drift analysis, and deterministic governance—all Markdown-backed with full audit trails and zero external dependencies.
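The RRF fusion step named above is standard Reciprocal Rank Fusion; mind-mem's internals are not shown here, but the technique itself is simple: each retriever contributes 1/(k + rank) per document, and documents are re-ranked by the summed score. A minimal sketch:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Reciprocal Rank Fusion: score(d) = sum over rankers of 1 / (k + rank_d).
    k (conventionally 60) damps the influence of any single ranker."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["b3", "b1", "b7"]   # lexical (BM25-style) ranking of memory blocks
vec = ["b1", "b9", "b3"]    # vector-similarity ranking of the same query
fused = rrf_fuse([bm25, vec])
# "b1" ranks first: it appears near the top of both lists
```

Because RRF uses only ranks, not raw scores, it fuses BM25 and cosine-similarity results without any score normalization, which is why it is a common choice for hybrid lexical + vector retrieval.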