Memori and Memary

Both projects are open-source memory persistence layers for LLM agents (Memori built on SQL, Memary on knowledge graphs), with MemoriLabs' implementation showing significantly broader adoption and more active maintenance based on download, star, and commit metrics.

Metric          Memori           Memary
Score           90 (Verified)    58 (Established)
Maintenance     25/25            0/25
Adoption        21/25            14/25
Maturity        24/25            25/25
Community       20/25            19/25
Stars           12,351           2,576
Forks           1,112            193
Downloads       21,330           39
Commits (30d)   58               0
Language        Python           Jupyter Notebook
License         —                MIT
Risk flags      None             Stale 6m

About Memori

MemoriLabs/Memori

SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems

Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries—achieving 81.95% accuracy on long-context tasks while reducing token usage to ~5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.
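The intercept-persist-retrieve loop described above can be sketched generically. Everything below (the class, method names, and the keyword-overlap retrieval) is an illustrative assumption, not Memori's actual API; it only demonstrates how persisting turns to SQL and injecting a small retrieved slice keeps token usage far below full-context replay:

```python
import sqlite3

class SQLConversationMemory:
    """Hypothetical sketch of the pattern Memori describes:
    persist every exchange to SQL, then inject relevant past
    context into later prompts. Names are illustrative."""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS turns ("
            "id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def record(self, role, content):
        # Persist one conversation turn.
        self.conn.execute(
            "INSERT INTO turns (role, content) VALUES (?, ?)",
            (role, content),
        )
        self.conn.commit()

    def relevant_context(self, query, limit=3):
        # Naive keyword-overlap ranking; a real system would retrieve
        # semantically, but this shows how only a small slice of the
        # stored history (not the full context) reaches the model.
        words = set(query.lower().split())
        rows = self.conn.execute("SELECT content FROM turns").fetchall()
        scored = [(len(words & set(c.lower().split())), c) for (c,) in rows]
        scored.sort(key=lambda s: s[0], reverse=True)
        return [c for score, c in scored[:limit] if score > 0]

def chat(memory, user_message, llm=lambda prompt: "ok"):
    # Interception wrapper: fetch context, call the model,
    # then persist both sides of the exchange.
    context = memory.relevant_context(user_message)
    prompt = "\n".join(context + [user_message])
    reply = llm(prompt)
    memory.record("user", user_message)
    memory.record("assistant", reply)
    return reply
```

In the real library this wrapping reportedly happens at the SDK level, which is what lets it hook OpenAI or Anthropic clients without application code changes.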

About Memary

kingjulio8238/Memary

The Open Source Memory Layer For Autonomous Agents

Implements a multi-layered memory architecture combining episodic memory streams, entity knowledge graphs (via FalkorDB or Neo4j), and dynamic user/system personas that automatically evolve through agent interactions. Supports both local models via Ollama (Llama 3, LLaVA) and OpenAI APIs with pluggable tools, enabling developers to integrate memory into existing LlamaIndex ReAct agents or use the built-in ChatAgent implementation with minimal code changes.
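The three layers named above can be sketched in miniature. This is an assumed illustration of the architecture, not Memary's code: the episodic stream is a plain list, the entity graph a dict (Memary itself backs it with FalkorDB or Neo4j), and the persona a frequency counter that drifts as the agent interacts:

```python
from collections import Counter

class AgentMemory:
    """Illustrative sketch of a layered agent memory: an episodic
    stream, an entity knowledge graph, and a persona that evolves
    with usage. All names and logic here are hypothetical."""

    def __init__(self):
        self.episodes = []        # episodic memory stream, in order
        self.graph = {}           # entity -> set of co-mentioned entities
        self.persona = Counter()  # entity counts approximating interests

    def observe(self, text, entities=()):
        # One interaction updates every layer at once.
        self.episodes.append(text)
        for a in entities:
            self.graph.setdefault(a, set()).update(
                e for e in entities if e != a
            )
        self.persona.update(entities)

    def top_interests(self, n=2):
        # Persona layer: what the user engages with most.
        return [e for e, _ in self.persona.most_common(n)]

    def neighbors(self, entity):
        # Knowledge-graph lookup for related entities.
        return sorted(self.graph.get(entity, set()))
```

A ReAct-style agent would consult `neighbors()` and `top_interests()` when composing its prompt, while `observe()` runs after every tool call or reply.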

Scores updated daily from GitHub, PyPI, and npm data.