SimpleMem and Structured-Memory-Engine
About SimpleMem
aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
Implements a three-stage semantic compression pipeline—structured compression, online synthesis, and intent-aware retrieval—to maximize information density while minimizing token overhead. Exposes memory functionality through MCP (Model Context Protocol) servers and Python packages, integrating with Claude Desktop, Cursor, LM Studio, and other AI platforms. Supports persistent cross-session memory that reportedly outperforms Claude's native memory by 64% on standard benchmarks.
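The three stages described above can be sketched as plain functions. This is a hypothetical illustration of the compress → synthesize → retrieve flow, not SimpleMem's actual API; all function names and the toy keyword logic are assumptions.

```python
import re

def structured_compression(turn: str) -> dict:
    """Stage 1 (sketch): reduce a raw conversation turn to a dense record.
    Toy compression: keep only sentences containing a decision keyword."""
    facts = [s.strip() for s in re.split(r"[.!?]", turn) if "decided" in s.lower()]
    return {"facts": facts, "source": turn[:40]}

def online_synthesis(memory: list, record: dict) -> list:
    """Stage 2 (sketch): merge a new record into memory, dropping duplicate facts."""
    known = {f for m in memory for f in m["facts"]}
    record["facts"] = [f for f in record["facts"] if f not in known]
    if record["facts"]:
        memory.append(record)
    return memory

def intent_aware_retrieval(memory: list, query: str) -> list:
    """Stage 3 (sketch): return facts whose words overlap the query terms."""
    terms = set(query.lower().split())
    return [f for m in memory for f in m["facts"]
            if terms & set(f.lower().split())]

memory: list = []
memory = online_synthesis(memory, structured_compression(
    "We decided to ship v2 on Friday. The weather was nice."))
print(intent_aware_retrieval(memory, "when do we ship v2"))
# → ['We decided to ship v2 on Friday']
```

The point of the shape, per the blurb, is that only the compressed facts (not raw turns) are stored and later injected, which is what keeps token overhead low across sessions.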
About Structured-Memory-Engine
Bryptobricks/Structured-Memory-Engine
Persistent, self-maintaining memory for AI agents. 990 tests. <1ms recall. $0/month forever.
Implements a 6-signal ranking pipeline (keyword match + semantic similarity + recency + type priority + file weight + entity overlap) over SQLite FTS5 with local embeddings, enabling sub-50ms context injection without API calls. Features entity graph linking, confidence decay with configurable half-life, contradiction detection with temporal awareness, and query intent classification that surfaces action items or factual results based on question type. Ingests meeting transcripts into tagged markdown, auto-captures decisions from conversation, and includes a built-in recall benchmark suite to regression-test retrieval quality.
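The 6-signal ranking and half-life confidence decay described above can be sketched as a weighted sum plus exponential decay. The signal weights and field names here are assumptions for illustration, not the project's actual values.

```python
import math

# Assumed weights for the six signals; each signal is normalized to [0, 1].
WEIGHTS = {
    "keyword": 0.30, "semantic": 0.25, "recency": 0.15,
    "type_priority": 0.10, "file_weight": 0.10, "entity_overlap": 0.10,
}

def confidence(age_seconds: float, half_life_seconds: float) -> float:
    """Exponential decay: confidence halves every half_life_seconds."""
    return 0.5 ** (age_seconds / half_life_seconds)

def rank_score(signals: dict) -> float:
    """Weighted sum of the six normalized signals."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

fresh = {"keyword": 1.0, "semantic": 0.8, "recency": 1.0,
         "type_priority": 0.5, "file_weight": 0.5, "entity_overlap": 1.0}
print(round(rank_score(fresh), 3))                                # → 0.85
print(round(confidence(age_seconds=86400,
                       half_life_seconds=86400), 3))              # one half-life → 0.5
```

Because both the score and the decay are cheap arithmetic over locally stored signals (FTS5 matches, cached embeddings, timestamps), no network call is needed at query time, which is consistent with the sub-50ms injection claim.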