LightMem and EverMemOS

LightMem provides an efficient memory-augmented generation framework for individual LLM inference, while EverMemOS offers persistent long-term memory infrastructure across multiple agents and platforms—making them complementary components that could be combined (LightMem's in-context memory with EverMemOS's persistent storage).
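To make the "complementary components" claim concrete, here is a minimal, self-contained sketch of a two-tier memory design: a short-lived in-context buffer (the role LightMem would play) layered over a durable cross-session store (the role EverMemOS would play). All class and method names are illustrative assumptions, not either project's actual API.

```python
class SessionMemory:
    """Short-lived in-context buffer (illustrating LightMem's role)."""
    def __init__(self, capacity=5):
        self.capacity = capacity
        self.turns = []

    def add(self, turn):
        # Keep only the most recent turns within the context window.
        self.turns.append(turn)
        self.turns = self.turns[-self.capacity:]

class PersistentMemory:
    """Durable cross-session store (illustrating EverMemOS's role)."""
    def __init__(self):
        self.records = {}

    def save(self, user_id, fact):
        self.records.setdefault(user_id, []).append(fact)

    def recall(self, user_id):
        return self.records.get(user_id, [])

def build_context(user_id, session, persistent):
    """Merge durable facts with the live conversation window."""
    return persistent.recall(user_id) + session.turns

session = SessionMemory()
persistent = PersistentMemory()
persistent.save("u1", "Prefers Apache-2.0 licensed tools")
session.add("What memory framework should I use?")
context = build_context("u1", session, persistent)
```

The design point is simply that the two layers answer different questions: the session buffer bounds what fits in the prompt, while the persistent store survives restarts and can be shared across agents.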

| Metric | LightMem | EverMemOS |
| --- | --- | --- |
| Overall score | 74 (Verified) | 64 (Established) |
| Maintenance | 20/25 | 20/25 |
| Adoption | 14/25 | 10/25 |
| Maturity | 24/25 | 13/25 |
| Community | 16/25 | 21/25 |
| Stars | 677 | 2,570 |
| Forks | 58 | 283 |
| Downloads | 61 | — |
| Commits (30d) | 7 | 15 |
| Language | Python | Python |
| License | MIT | Apache-2.0 |
| Notes | No dependents | No package, no dependents |

About LightMem

zjunlp/LightMem

[ICLR 2026] LightMem: Lightweight and Efficient Memory-Augmented Generation

Employs a modular architecture with pluggable storage engines and retrieval strategies to manage long-term memory for LLMs and AI agents. Supports both cloud APIs (OpenAI, DeepSeek) and local deployment via Ollama, vLLM, and Transformers with integrated memory update mechanisms. Includes benchmark evaluation frameworks for LoCoMo and LongMemEval datasets, with hierarchical memory structures (StructMem) that preserve event-level bindings and cross-event connections.
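The "pluggable storage engines and retrieval strategies" pattern described above can be sketched as an interface plus a swappable backend. This is a minimal illustration of the architectural idea only; the class and method names are assumptions, not LightMem's actual API.

```python
from abc import ABC, abstractmethod

class MemoryStore(ABC):
    """Pluggable storage-engine interface (illustrative, not LightMem's API)."""
    @abstractmethod
    def add(self, text: str) -> None: ...
    @abstractmethod
    def search(self, query: str, k: int = 3) -> list: ...

class InMemoryStore(MemoryStore):
    """Naive keyword-overlap backend standing in for a real engine."""
    def __init__(self):
        self.entries = []

    def add(self, text: str) -> None:
        self.entries.append(text)

    def search(self, query: str, k: int = 3) -> list:
        q = set(query.lower().split())
        # Rank entries by how many query tokens they share.
        ranked = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return ranked[:k]

def augment_prompt(store: MemoryStore, user_query: str) -> str:
    """Memory-augmented generation: prepend retrieved memories to the query."""
    memories = store.search(user_query)
    context = "\n".join(f"- {m}" for m in memories)
    return f"Relevant memories:\n{context}\n\nUser: {user_query}"

store = InMemoryStore()
store.add("User prefers concise answers.")
store.add("User is working on a Python project.")
prompt = augment_prompt(store, "Help with my Python project")
```

Because callers depend only on the `MemoryStore` interface, a vector database or disk-backed engine could replace `InMemoryStore` without touching the prompt-building code.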

About EverMemOS

EverMind-AI/EverMemOS

Long-term memory for your 24/7 OpenClaw agents across LLMs and platforms.

Provides structured memory extraction from conversations using LLM-based encoding, organizes data into episodes and user profiles stored across MongoDB/Milvus/Elasticsearch, and exposes a REST API for retrieval with BM25, semantic embedding, and agentic search capabilities. Integrates directly with OpenClaw agents and supports TEN Framework for real-time applications, Claude Code plugins, and computer-use scenarios requiring persistent context across sessions.
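The description mentions retrieval that blends BM25 with semantic embeddings. The hybrid-scoring idea can be sketched in a few dozen self-contained lines; the toy "embedding" below is a bag-of-words counter rather than a real model, and none of this reflects EverMemOS's actual endpoints or internals.

```python
import math
from collections import Counter

DOCS = [
    "episode: user asked about vector databases and Milvus setup",
    "profile: user prefers Python and works on agent frameworks",
    "episode: user debugged an Elasticsearch query yesterday",
]

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Minimal BM25 over whitespace tokens (illustrative only)."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in tokenized) / len(tokenized)
    n = len(docs)
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = 0.0
        for term in query.lower().split():
            df = sum(1 for d in tokenized if term in d)
            if df == 0:
                continue
            idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

def embed(text):
    """Toy 'embedding': bag-of-words vector (a real system uses a model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_search(query, docs, alpha=0.5):
    """Blend lexical (BM25) and semantic scores, as hybrid retrievers do."""
    lex = bm25_scores(query, docs)
    sem = [cosine(embed(query), embed(d)) for d in docs]
    combined = [alpha * l + (1 - alpha) * s for l, s in zip(lex, sem)]
    return max(range(len(docs)), key=lambda i: combined[i])

best = DOCS[hybrid_search("Milvus vector database setup", DOCS)]
```

In a production stack like the one described, the lexical leg would be served by Elasticsearch and the semantic leg by Milvus, with the blend (or an agentic reranker) applied behind the REST API.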

Scores updated daily from GitHub, PyPI, and npm data.