Memori and ReMe

Memori, a SQL-native memory layer designed for production LLM systems, complements ReMe, a memory management kit focused on agent-level recall and refinement. The two address different layers: persistent storage versus semantic memory organization.

                 Memori          ReMe
Score            90 (Verified)   70 (Verified)
Maintenance      25/25           25/25
Adoption         21/25           10/25
Maturity         24/25           16/25
Community        20/25           19/25
Stars            12,351          2,185
Forks            1,112           161
Downloads        21,330          —
Commits (30d)    58              52
Language         Python          Python
License          —               Apache-2.0
Risk flags       None            No Package, No Dependents

About Memori

MemoriLabs/Memori

SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems

Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries—achieving 81.95% accuracy on long-context tasks while reducing token usage to ~5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.

About ReMe

agentscope-ai/ReMe

ReMe: Memory Management Kit for Agents - Remember Me, Refine Me.

Provides dual file-based and vector-based memory architectures that compress long conversations into persistent summaries while enabling hybrid semantic search (vectors + BM25). Automatically manages context windows through a Compactor component, persists agent knowledge across sessions as human-readable Markdown files, and includes a MemorySearch tool for retrieving relevant historical context. Integrates with LLM/embedding APIs and includes a file-watcher system that asynchronously summarizes conversations and caches embeddings.
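The hybrid search described above combines two rankers. ReMe's actual fusion method is not specified here; as a hedged sketch, one common way to merge a vector ranking with a BM25 ranking is reciprocal rank fusion (RRF). The document names, the hit lists, and the `k` constant below are illustrative.

```python
# Reciprocal rank fusion: score(d) = sum over ranked lists of 1 / (k + rank).
# Documents ranked highly by either retriever float to the top.
def rrf(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of a dense-vector retriever and a BM25 retriever.
vector_hits = ["notes.md", "plan.md", "log.md"]
bm25_hits = ["plan.md", "log.md", "notes.md"]

print(rrf([vector_hits, bm25_hits]))  # → ['plan.md', 'notes.md', 'log.md']
```

Rank-based fusion sidesteps the problem that cosine similarities and BM25 scores live on incomparable scales, which is why it is a common default for vector-plus-keyword setups.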

Scores updated daily from GitHub, PyPI, and npm data.