Memori and MemoryOS

A SQL-native memory layer complements a memory operating system: it supplies the persistent, queryable backend that a personalized agent OS can build its structured memory operations on.

                 Memori            MemoryOS
Score            90 (Verified)     52 (Established)
Maintenance      25/25             13/25
Adoption         21/25             10/25
Maturity         24/25             9/25
Community        20/25             20/25
Stars            12,351            1,256
Forks            1,112             127
Downloads        21,330            (not listed)
Commits (30d)    58                4
Language         Python            Python
License          (not listed)      Apache-2.0
Risk flags       None              No package, no dependents

About Memori

MemoriLabs/Memori

SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems

Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries, achieving 81.95% accuracy on long-context tasks while reducing token usage to roughly 5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.
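The intercept-and-persist pattern described above can be sketched in plain Python with SQLite. This is an illustrative sketch, not Memori's actual API: the `SqlMemory` and `chat` names are hypothetical, and naive keyword matching stands in for Memori's relevance ranking.

```python
import sqlite3

class SqlMemory:
    """Hypothetical sketch of a SQL-backed conversation memory (not Memori's API)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages "
            "(id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def record(self, role, content):
        # Persist every intercepted message to SQL.
        self.db.execute(
            "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
        )
        self.db.commit()

    def recall(self, query, limit=3):
        # Naive LIKE-based retrieval stands in for real relevance ranking.
        rows = self.db.execute(
            "SELECT content FROM messages WHERE content LIKE ? "
            "ORDER BY id DESC LIMIT ?",
            (f"%{query}%", limit),
        ).fetchall()
        return [r[0] for r in rows]

def chat(memory, llm_fn, user_msg):
    # Inject only recalled context rather than the full history,
    # which is how a memory layer cuts token usage.
    context = memory.recall(user_msg)
    memory.record("user", user_msg)
    reply = llm_fn(context, user_msg)
    memory.record("assistant", reply)
    return reply
```

In use, the wrapper records each turn and feeds back only the handful of stored rows that match the new query, rather than the entire transcript.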

About MemoryOS

BAI-LAB/MemoryOS

[EMNLP 2025 Oral] MemoryOS is designed to provide a memory operating system for personalized AI agents.

Implements a hierarchical memory architecture with four core modules (Storage, Updating, Retrieval, Generation) that manages short-term, mid-term, and long-term persona memory through dynamic updates and context-aware retrieval. Exposes memory capabilities via MCP Server with pluggable storage engines (including Chromadb vector database), multiple embedding models (BGE-M3, Qwen), and universal LLM support across OpenAI, Anthropic, Deepseek, and other providers for seamless agent integration.

Scores updated daily from GitHub, PyPI, and npm data.