joshuaswarren/openclaw-engram
Local-first memory plugin for OpenClaw AI agents. LLM-powered extraction, plain markdown storage, hybrid search via QMD. Gives agents persistent long-term memory across conversations.
Engram integrates as a native OpenClaw plugin and as an MCP server. Extraction is LLM-powered and supports both cloud (OpenAI) and local backends (Ollama, LM Studio); with a local backend it runs with zero cloud API dependencies. Memories persist as git-friendly markdown files with YAML frontmatter and lifecycle management (fact, decision, preference, correction, and entity tracking), and hybrid search (BM25 plus vector reranking via QMD) surfaces contextual knowledge at conversation start. A three-phase recall-buffer-extract pipeline, triggered on conversation turns, injects semantic memory across multiple agent harnesses and MCP clients (Claude Code, Codex CLI), whether on a single machine or in a distributed setup.
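The description above mentions memories stored as markdown files with YAML frontmatter and lifecycle types. A minimal sketch of what such a file could look like follows; the specific frontmatter keys (`type`, `created`, `entities`) are illustrative assumptions, not documented Engram fields:

```markdown
---
# "type" is one of the lifecycle categories named above:
# fact, decision, preference, correction, entity
type: preference
created: 2026-03-01        # hypothetical timestamp field
entities: [typescript]     # hypothetical entity-tracking field
---

The user prefers TypeScript strict mode enabled in all new projects.
```

A plain-markdown body under structured frontmatter keeps the files human-readable and git-diffable while still giving the BM25/vector index typed metadata to filter on.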
Available on npm.
Stars: 13
Forks: 6
Language: TypeScript
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Dependencies: 9
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/joshuaswarren/openclaw-engram"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Related tools
aiming-lab/SimpleMem
SimpleMem: Efficient Lifelong Memory for LLM Agents
zilliztech/GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
RichmondAlake/memorizz
MemoRizz: A Python library serving as a memory layer for AI applications. Leverages popular...
TeleAI-UAGI/telemem
TeleMem is a high-performance drop-in replacement for Mem0, featuring semantic deduplication,...