Memori and superlocalmemory
These are **competitors**: both provide memory layers for LLMs/agents, but Memori prioritizes SQL-native integration with cloud/multi-agent scalability, while superlocalmemory prioritizes local-only processing without external APIs.
About Memori
MemoriLabs/Memori
SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems
Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries—achieving 81.95% accuracy on long-context tasks while reducing token usage to ~5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.
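The intercept-persist-retrieve loop described above can be sketched with a plain SQLite table: every turn is written to SQL as it happens, and relevant turns are pulled back on later queries. This is a minimal illustration only, not Memori's actual API; the `SQLMemory` class, its method names, and the naive keyword lookup are all hypothetical stand-ins for Memori's smarter context selection.

```python
import sqlite3

class SQLMemory:
    """Minimal sketch of a SQL-backed conversation memory
    (illustrative only; not Memori's actual API)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS turns ("
            "id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def record(self, role, content):
        # Persist every turn as it happens ("interception").
        self.db.execute(
            "INSERT INTO turns (role, content) VALUES (?, ?)",
            (role, content))
        self.db.commit()

    def recall(self, query, limit=3):
        # Naive keyword match stands in for intelligent
        # context retrieval; newest matches first.
        rows = self.db.execute(
            "SELECT role, content FROM turns WHERE content LIKE ? "
            "ORDER BY id DESC LIMIT ?", (f"%{query}%", limit))
        return rows.fetchall()

mem = SQLMemory()
mem.record("user", "My project deadline is March 3rd.")
mem.record("assistant", "Noted: deadline March 3rd.")
print(mem.recall("deadline"))
```

Injecting only the recalled rows, rather than the full history, is what drives the token savings claimed above: the prompt carries a handful of relevant turns instead of the entire conversation.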
About superlocalmemory
qualixar/superlocalmemory
Claims to be the world's first local-only AI memory to exceed 74% retrieval and 60% zero-LLM accuracy on the LoCoMo benchmark. No cloud, no APIs, no data leaves your machine. An optional mode C (LLM/cloud) reaches 87.7% on LoCoMo. Research-backed. arXiv: 2603.14588
Implements a hybrid retrieval architecture combining Fisher-Rao geodesic distance (information geometry), BM25 keyword matching, entity graph spreading activation, and temporal indexing with RRF fusion and cross-encoder reranking — entirely replacing cloud LLM dependency with mathematical guarantees from differential geometry and algebraic topology. Offers dual MCP + CLI interfaces for IDE integration and agent-native JSON output, with three operating modes (A: zero-cloud, B: local Ollama, C: optional cloud) supporting frameworks like Claude, Cursor, and custom agent pipelines. Designed for EU AI Act compliance and achieves 85% on open-domain retrieval without any cloud calls or GPU requirements.
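Of the components above, RRF (reciprocal rank fusion) is the glue that merges the separate rankers (Fisher-Rao, BM25, graph, temporal) into one list. The sketch below shows standard RRF under common assumptions (rank-1-based scoring, the conventional constant k=60); it is a generic illustration, not superlocalmemory's actual code.

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids.

    A document's fused score is the sum over input rankings of
    1 / (k + rank), with rank starting at 1, so documents that
    appear near the top of multiple rankers win.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Two rankers disagree on everything except d1; RRF rewards the
# document both rankers placed first.
bm25_ranking = ["d1", "d3", "d2"]
dense_ranking = ["d1", "d4", "d3"]
print(rrf_fuse([bm25_ranking, dense_ranking]))
# → ['d1', 'd3', 'd4', 'd2']
```

Because RRF works on ranks rather than raw scores, it needs no score normalization across heterogeneous rankers, which is why it suits a hybrid stack mixing geometric distances, BM25 scores, and graph activations. A cross-encoder rerank can then refine the fused top-k.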
Scores updated daily from GitHub, PyPI, and npm data.