LightMem and EverMemOS
LightMem provides an efficient memory-augmented generation framework for individual LLM inference, while EverMemOS offers persistent long-term memory infrastructure that spans multiple agents and platforms. The two are complementary rather than competing: LightMem's lightweight in-context memory could be layered on top of EverMemOS's persistent storage.
About LightMem
zjunlp/LightMem
[ICLR 2026] LightMem: Lightweight and Efficient Memory-Augmented Generation
Employs a modular architecture with pluggable storage engines and retrieval strategies to manage long-term memory for LLMs and AI agents. Supports both cloud APIs (OpenAI, DeepSeek) and local deployment via Ollama, vLLM, and Transformers with integrated memory update mechanisms. Includes benchmark evaluation frameworks for LoCoMo and LongMemEval datasets, with hierarchical memory structures (StructMem) that preserve event-level bindings and cross-event connections.
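The pluggable-storage design described above can be illustrated with a minimal sketch. Note that the class and method names here (`MemoryStore`, `InMemoryStore`, `MemoryAugmentedAgent`) are hypothetical illustrations of the pattern, not LightMem's actual API:

```python
from abc import ABC, abstractmethod

class MemoryStore(ABC):
    """Hypothetical storage-engine interface; LightMem's real API may differ."""
    @abstractmethod
    def add(self, text: str) -> None: ...
    @abstractmethod
    def search(self, query: str, top_k: int = 3) -> list[str]: ...

class InMemoryStore(MemoryStore):
    """Naive backend: keyword-overlap retrieval over an in-process list."""
    def __init__(self) -> None:
        self.entries: list[str] = []

    def add(self, text: str) -> None:
        self.entries.append(text)

    def search(self, query: str, top_k: int = 3) -> list[str]:
        q = set(query.lower().split())
        # Rank stored memories by how many query tokens they share.
        ranked = sorted(self.entries,
                        key=lambda e: len(q & set(e.lower().split())),
                        reverse=True)
        return ranked[:top_k]

class MemoryAugmentedAgent:
    """Injects retrieved memories into the prompt before calling the LLM."""
    def __init__(self, store: MemoryStore) -> None:
        self.store = store

    def build_prompt(self, user_msg: str) -> str:
        memories = self.store.search(user_msg, top_k=2)
        context = "\n".join(f"- {m}" for m in memories)
        return f"Relevant memories:\n{context}\n\nUser: {user_msg}"
```

Because the agent depends only on the abstract interface, swapping `InMemoryStore` for a vector-database or disk-backed engine requires no change to the agent itself, which is the point of a pluggable architecture.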
About EverMemOS
EverMind-AI/EverMemOS
Long-term memory for your 24/7 OpenClaw agents across LLMs and platforms.
Provides structured memory extraction from conversations using LLM-based encoding, organizes data into episodes and user profiles stored across MongoDB/Milvus/Elasticsearch, and exposes a REST API for retrieval with BM25, semantic embedding, and agentic search capabilities. Integrates directly with OpenClaw agents and supports TEN Framework for real-time applications, Claude Code plugins, and computer-use scenarios requiring persistent context across sessions.
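Hybrid retrieval of the kind described above, blending lexical BM25 scores with embedding similarity, can be sketched in a few lines. This is an illustrative fusion scheme, not EverMemOS's actual implementation; the `alpha` weight and function names are assumptions:

```python
import math
from collections import Counter

def bm25_scores(query_terms: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Standard BM25 over pre-tokenized documents."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((n - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine(u: list[float], v: list[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_rank(query_terms: list[str], query_vec: list[float],
                docs: list[list[str]], doc_vecs: list[list[float]],
                alpha: float = 0.5) -> list[int]:
    """Return doc indices ranked by a blend of normalized BM25
    and embedding cosine similarity. alpha is an illustrative weight."""
    lex = bm25_scores(query_terms, docs)
    m = max(lex) or 1.0  # avoid division by zero when no lexical match
    fused = [alpha * (l / m) + (1 - alpha) * cosine(query_vec, v)
             for l, v in zip(lex, doc_vecs)]
    return sorted(range(len(docs)), key=lambda i: -fused[i])
```

In a production system the token lists would come from an Elasticsearch-style analyzer and the vectors from an embedding model, but the score-fusion step itself stays this simple.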