Memori and LightMem

These two projects take different approaches to memory management for LLMs: Memori (by MemoriLabs) emphasizes SQL-native persistence and scalability for production multi-agent systems, while LightMem is a research-driven project targeting lightweight, efficient memory augmentation under resource constraints.

                 Memori          LightMem
Score            90 (Verified)   74 (Verified)
Maintenance      25/25           20/25
Adoption         21/25           14/25
Maturity         24/25           24/25
Community        20/25           16/25
Stars            12,351          677
Forks            1,112           58
Downloads        21,330          61
Commits (30d)    58              7
Language         Python          Python
License                          MIT
Flags            No risk flags   No Dependents

About Memori

MemoriLabs/Memori

SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems

Automatically intercepts and persists LLM conversations to SQL, then intelligently retrieves relevant context on subsequent queries—achieving 81.95% accuracy on long-context tasks while reducing token usage to ~5% of full-context approaches. Integrates directly with OpenAI, Anthropic, and other LLM providers via SDK wrappers, plus hooks into OpenClaw agents and MCP-compatible tools (Claude Code, Cursor) without requiring code changes. Supports bring-your-own-database deployments for self-hosted setups alongside cloud-hosted options.
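The persist-then-retrieve pattern can be sketched in a few lines. This is a minimal illustration of the idea, not Memori's actual API: each conversation turn is written to a SQL table, and later queries pull back only the matching rows as context instead of replaying the full history (the `SqlMemory` class and its naive keyword retrieval are hypothetical stand-ins).

```python
import sqlite3

class SqlMemory:
    """Toy SQL-native memory layer (illustrative only, not Memori's API)."""

    def __init__(self, db_path=":memory:"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS memory ("
            "id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
        )

    def record(self, role, content):
        # Called after each LLM exchange to persist the turn to SQL.
        self.conn.execute(
            "INSERT INTO memory (role, content) VALUES (?, ?)", (role, content)
        )
        self.conn.commit()

    def recall(self, query, limit=3):
        # Naive keyword match stands in for smarter retrieval; only the
        # matching rows are injected into the prompt, which is how this
        # pattern keeps token usage far below full-context replay.
        return self.conn.execute(
            "SELECT role, content FROM memory WHERE content LIKE ? "
            "ORDER BY id DESC LIMIT ?", (f"%{query}%", limit)
        ).fetchall()

mem = SqlMemory()
mem.record("user", "My deploy target is eu-west-1.")
mem.record("assistant", "Noted: deploys go to eu-west-1.")
context = mem.recall("eu-west-1")  # most recent matching turns first
```

In the real library, the `record` step is handled transparently by the SDK wrappers around OpenAI/Anthropic clients, so application code does not call the memory layer directly.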

About LightMem

zjunlp/LightMem

[ICLR 2026] LightMem: Lightweight and Efficient Memory-Augmented Generation

Employs a modular architecture with pluggable storage engines and retrieval strategies to manage long-term memory for LLMs and AI agents. Supports both cloud APIs (OpenAI, DeepSeek) and local deployment via Ollama, vLLM, and Transformers with integrated memory update mechanisms. Includes benchmark evaluation frameworks for LoCoMo and LongMemEval datasets, with hierarchical memory structures (StructMem) that preserve event-level bindings and cross-event connections.
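The pluggable-architecture idea can be illustrated with a small sketch. All names here are hypothetical (this is not LightMem's API): a storage engine and a retrieval strategy are separated behind interfaces, so either can be swapped without touching the other.

```python
from typing import Protocol

class Store(Protocol):
    """Storage-engine interface; a SQL or vector backend could plug in here."""
    def add(self, text: str) -> None: ...
    def all(self) -> list[str]: ...

class InMemoryStore:
    """Simplest possible storage engine: a Python list."""
    def __init__(self):
        self._items: list[str] = []

    def add(self, text: str) -> None:
        self._items.append(text)

    def all(self) -> list[str]:
        return self._items

def overlap_retrieval(store: Store, query: str, k: int = 2) -> list[str]:
    # Score memories by word overlap with the query. A real strategy
    # (embeddings, BM25, ...) would plug in behind the same signature.
    q = set(query.lower().split())
    scored = sorted(
        store.all(),
        key=lambda t: len(q & set(t.lower().split())),
        reverse=True,
    )
    return scored[:k]

store = InMemoryStore()
store.add("User prefers Ollama for local inference.")
store.add("Benchmark run scheduled for Friday.")
hits = overlap_retrieval(store, "local inference with ollama")
```

Keeping storage and retrieval behind separate interfaces is what lets a framework like this support both cloud APIs and local backends (Ollama, vLLM, Transformers) from the same memory-management core.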

Scores updated daily from GitHub, PyPI, and npm data.