Memori and memwire

The tools are direct competitors: both aim to provide an open-source, self-hosted memory infrastructure layer for AI agents. MemoriLabs/Memori offers a more mature and widely adopted SQL-native solution than memoryoss/memwire.

Memori (Score 90, Verified)
Maintenance: 25/25 | Adoption: 21/25 | Maturity: 24/25 | Community: 20/25
Stars: 12,351 | Forks: 1,112 | Downloads: 21,330 | Commits (30d): 58
Language: Python | License:
No risk flags

memwire (Score 41, Emerging)
Maintenance: 13/25 | Adoption: 10/25 | Maturity: 18/25 | Community: 0/25
Stars: 6 | Forks: | Downloads: 309 | Commits (30d): 0
Language: Python | License: Apache-2.0
No risk flags

About Memori

MemoriLabs/Memori

SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems

Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries, achieving 81.95% accuracy on long-context tasks while reducing token usage to roughly 5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.
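The intercept-persist-retrieve loop described above can be sketched in plain Python. This is an illustrative toy, not Memori's actual API: the `MemoryStore` class, `fake_llm` stub, and the naive `LIKE`-based retrieval are all assumptions standing in for Memori's interception wrappers and relevance-ranked recall.

```python
import sqlite3

class MemoryStore:
    """Toy SQL-backed conversation memory (hypothetical, not Memori's API)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory "
            "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def record(self, role, content):
        # Persist one side of an exchange to SQL.
        self.db.execute(
            "INSERT INTO memory (role, content) VALUES (?, ?)", (role, content)
        )
        self.db.commit()

    def recall(self, query, limit=3):
        # Naive keyword match; a real system would rank by semantic relevance.
        rows = self.db.execute(
            "SELECT content FROM memory WHERE content LIKE ? LIMIT ?",
            (f"%{query}%", limit),
        ).fetchall()
        return [r[0] for r in rows]

def fake_llm(prompt, context):
    # Stub standing in for a provider call (OpenAI, Anthropic, ...).
    return f"answered with {len(context)} remembered lines"

store = MemoryStore()
store.record("user", "My favorite database is Postgres")
context = store.recall("Postgres")
reply = fake_llm("Which database do I like?", context)
store.record("assistant", reply)
print(reply)  # prints "answered with 1 remembered lines"
```

The design point the sketch illustrates: because memory lives in ordinary SQL rows, only the few recalled lines are injected into later prompts, which is how a system like this can cut token usage far below re-sending full context.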

About memwire

memoryoss/memwire

Open source self-hosted AI memory infrastructure layer

Implements graph-based semantic memory with categorized facts (preferences, events, entities, instructions) that strengthen or decay based on feedback loops, enabling persistent context recall across conversations. Provides both Python SDK and FastAPI REST interface, integrating with any LLM provider (OpenAI, Anthropic, Ollama) and vector stores (Qdrant, Pinecone, ChromaDB), while supporting multi-tenant isolation and knowledge base ingestion alongside conversation memory.
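The strengthen-or-decay feedback loop can be sketched as follows. This is a minimal illustration of the general technique, not memwire's implementation: the `Fact`/`SemanticMemory` names, the multiplicative decay, and the additive boost are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Fact:
    category: str   # e.g. "preference", "event", "entity", "instruction"
    text: str
    strength: float = 1.0

class SemanticMemory:
    """Toy feedback-driven fact store (hypothetical, not memwire's API)."""

    def __init__(self, decay=0.9, boost=0.5, floor=0.1):
        self.facts = []
        self.decay, self.boost, self.floor = decay, boost, floor

    def add(self, category, text):
        self.facts.append(Fact(category, text))

    def feedback(self, recalled_texts):
        # Strengthen facts that proved useful, decay the rest,
        # and forget facts whose strength falls below the floor.
        for f in self.facts:
            if f.text in recalled_texts:
                f.strength += self.boost
            else:
                f.strength *= self.decay
        self.facts = [f for f in self.facts if f.strength >= self.floor]

mem = SemanticMemory()
mem.add("preference", "user prefers dark mode")
mem.add("event", "meeting moved to Friday")
mem.feedback({"user prefers dark mode"})
print(round(mem.facts[0].strength, 2))  # prints 1.5 after one reinforcement
```

Repeated rounds without reinforcement drive a fact's strength toward the floor and eventually evict it, which is what lets this kind of memory stay compact across long-running conversations.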

Scores updated daily from GitHub, PyPI, and npm data.