Memori and mem0

These are direct competitors offering persistent-memory infrastructure for LLMs and agents (Memori is SQL-native; mem0 supports multiple vector stores). mem0 has roughly four times Memori's GitHub stars, but it reports zero PyPI downloads (suggesting alternative distribution channels), which depresses its adoption score here.

| Metric        | Memori        | mem0                      |
| ------------- | ------------- | ------------------------- |
| Score         | 90 (Verified) | 72 (Verified)             |
| Maintenance   | 25/25         | 25/25                     |
| Adoption      | 21/25         | 10/25                     |
| Maturity      | 24/25         | 16/25                     |
| Community     | 20/25         | 21/25                     |
| Stars         | 12,351        | 49,646                    |
| Forks         | 1,112         | 5,542                     |
| Downloads     | 21,330        | (not listed)              |
| Commits (30d) | 58            | 180                       |
| Language      | Python        | Python                    |
| License       | (not listed)  | Apache-2.0                |
| Risk flags    | None          | No Package, No Dependents |

About Memori

MemoriLabs/Memori

SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems

Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries—achieving 81.95% accuracy on long-context tasks while reducing token usage to ~5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.
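The intercept-and-persist flow described above can be sketched generically. Everything in this sketch (the `MemoryLayer` class, the schema, the keyword-overlap retrieval) is an illustrative assumption, not Memori's actual API; real deployments would use embedding-based retrieval and the provider SDK wrappers mentioned above.

```python
import sqlite3

class MemoryLayer:
    """Illustrative sketch only (not Memori's real API): persist each
    conversation turn to SQL, then retrieve the most relevant past turns
    for a new query using naive keyword overlap."""

    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS exchanges ("
            "id INTEGER PRIMARY KEY, user_msg TEXT, assistant_msg TEXT)"
        )

    def record(self, user_msg, assistant_msg):
        # Persist the turn so later queries can reuse it as context.
        self.db.execute(
            "INSERT INTO exchanges (user_msg, assistant_msg) VALUES (?, ?)",
            (user_msg, assistant_msg),
        )
        self.db.commit()

    def relevant_context(self, query, limit=3):
        # Toy retrieval: rank stored turns by word overlap with the query.
        words = set(query.lower().split())
        rows = self.db.execute(
            "SELECT user_msg, assistant_msg FROM exchanges"
        ).fetchall()
        scored = sorted(
            rows,
            key=lambda r: len(words & set((r[0] + " " + r[1]).lower().split())),
            reverse=True,
        )
        return scored[:limit]

mem = MemoryLayer()
mem.record("What database does the project use?", "It uses SQLite.")
mem.record("What's the weather?", "Sunny.")
top = mem.relevant_context("Which database did we pick?")
```

The point of the pattern is that the caller's LLM code stays unchanged: a wrapper records each exchange after the fact and injects only the few retrieved turns into the next prompt, which is how a memory layer can cut token usage far below a full-context approach.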

About mem0

mem0ai/mem0

Universal memory layer for AI Agents

Implements multi-level memory (user, session, agent state) with adaptive retrieval that achieves 26% higher accuracy and 90% lower token usage than baseline approaches. Supports multiple LLMs and vector stores, with SDKs for Python and JavaScript, plus integrations for LangGraph and CrewAI. Offers both self-hosted open-source deployment and a managed platform with CLI tooling for memory management operations.
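The multi-level memory idea (separate user, session, and agent scopes, each independently searchable) can be sketched as follows. The class and method names here are illustrative assumptions, not mem0's published SDK, and the keyword match stands in for the adaptive retrieval described above.

```python
from collections import defaultdict

class ScopedMemory:
    """Illustrative sketch only (not mem0's real API): facts are stored
    under a (scope, scope_id) key such as ("user", "alice") or
    ("session", "s1"), and search is confined to the requested scope."""

    def __init__(self):
        # (scope, scope_id) -> list of stored facts
        self.store = defaultdict(list)

    def add(self, fact, scope, scope_id):
        self.store[(scope, scope_id)].append(fact)

    def search(self, query, scope, scope_id):
        # Toy relevance: keep facts sharing at least one word with the query.
        words = set(query.lower().split())
        return [
            fact for fact in self.store[(scope, scope_id)]
            if words & set(fact.lower().split())
        ]

mem = ScopedMemory()
mem.add("prefers dark mode", scope="user", scope_id="alice")
mem.add("working on checkout bug", scope="session", scope_id="s1")
hits = mem.search("dark mode settings", scope="user", scope_id="alice")
```

Keying memory by scope is what lets an agent keep durable user preferences separate from short-lived session state, so a session can be discarded without losing what is known about the user.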

Scores updated daily from GitHub, PyPI, and npm data.