aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
Implements a three-stage semantic compression pipeline—structured compression, online synthesis, and intent-aware retrieval—to maximize information density while minimizing token overhead. Exposes memory functionality through MCP (Model Context Protocol) servers and Python packages, integrating with Claude Desktop, Cursor, LM Studio, and other AI platforms. Supports persistent cross-session memory that reportedly outperforms Claude's native memory by 64% on standard benchmarks.
3,182 stars and 2,317 monthly downloads. Used by 1 other package. Actively maintained with 11 commits in the last 30 days. Available on PyPI.
Stars: 3,182
Forks: 310
Language: Python
License: MIT
Category:
Last pushed: Mar 10, 2026
Monthly downloads: 2,317
Commits (30d): 11
Dependencies: 8
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/aiming-lab/SimpleMem"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
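The curl example above can also be reproduced from Python. A minimal sketch, assuming only what the listing states (the endpoint path and the anonymous rate limit); the shape of the JSON response is an assumption, not documented here:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"

def build_url(owner: str, repo: str) -> str:
    # Mirrors the curl example above: owner/repo appended to the base path.
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access is limited to 100 requests/day per the listing.
    # Assumes the endpoint returns JSON; adjust parsing if it does not.
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

print(build_url("aiming-lab", "SimpleMem"))
# → https://pt-edge.onrender.com/api/v1/quality/embeddings/aiming-lab/SimpleMem
```

With an API key (free tier, 1,000 requests/day), you would presumably attach it to the request, though the listing does not specify the header or query-parameter name.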
Related tools
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
ascottbell/maasv
Memory Architecture as a Service — cognition layer for AI assistants. 3-signal retrieval,...
TeleAI-UAGI/telemem
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...
RichmondAlake/memorizz
MemoRizz: A Python library serving as a memory layer for AI applications. Leverages popular...