joshuaswarren/openclaw-engram

Local-first memory plugin for OpenClaw AI agents. LLM-powered extraction, plain markdown storage, hybrid search via QMD. Gives agents persistent long-term memory across conversations.

Score: 52 / 100 (Established)

Engram integrates as a native OpenClaw plugin and MCP server. Extraction is LLM-powered and works with either cloud (OpenAI) or local models (Ollama, LM Studio), so it can run with no cloud API dependencies. Memories persist as git-friendly markdown files with YAML frontmatter and lifecycle management (facts, decisions, preferences, corrections, entity tracking), and hybrid search (BM25 plus vector reranking via QMD) surfaces contextual knowledge at conversation start. The architecture is a three-phase recall-buffer-extract pipeline triggered by conversation turns, which enables semantic memory injection across multiple agent harnesses and MCP clients (Claude Code, Codex CLI) on a single machine or in distributed setups.
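The recall-buffer-extract pipeline described above can be sketched in TypeScript. This is an illustrative toy, not Engram's actual API: the class and method names are assumptions, recall here is a simple keyword match standing in for QMD search, and extraction uses a trivial heuristic where the real plugin delegates to an LLM.

```typescript
// Hypothetical sketch of a three-phase recall-buffer-extract pipeline.
// All names and signatures are illustrative, not Engram's real API.

type Memory = {
  kind: "fact" | "decision" | "preference" | "correction";
  text: string;
};

class MemoryPipeline {
  private buffer: string[] = [];
  constructor(private store: Memory[]) {}

  // Phase 1: recall — surface stored memories relevant to the new
  // conversation (toy keyword match standing in for hybrid search).
  recall(query: string): Memory[] {
    const terms = query.toLowerCase().split(/\s+/);
    return this.store.filter(m =>
      terms.some(t => m.text.toLowerCase().includes(t))
    );
  }

  // Phase 2: buffer — accumulate conversation turns until extraction runs.
  bufferTurn(turn: string): void {
    this.buffer.push(turn);
  }

  // Phase 3: extract — distill buffered turns into new memories.
  // (The real plugin delegates this classification step to an LLM.)
  extract(): Memory[] {
    const extracted = this.buffer
      .filter(t => t.startsWith("I prefer"))
      .map(t => ({ kind: "preference" as const, text: t }));
    this.store.push(...extracted);
    this.buffer = [];
    return extracted;
  }
}

const pipeline = new MemoryPipeline([
  { kind: "fact", text: "The project uses TypeScript" },
]);
pipeline.bufferTurn("I prefer tabs over spaces");
console.log(pipeline.recall("typescript project").length); // 1
console.log(pipeline.extract().length); // 1
```

The phases map to conversation lifecycle events: recall fires at conversation start, buffering on each turn, and extraction once enough turns have accumulated.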

Available on npm.
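The hybrid BM25 + vector reranking mentioned above can be illustrated with a minimal two-stage retriever. This is a toy stand-in, not QMD's implementation: keyword-overlap scoring substitutes for real BM25, and the embeddings, document shape, and function names are all assumptions.

```typescript
// Illustrative two-stage hybrid retrieval: lexical candidate selection,
// then vector rerank. Toy stand-in for the BM25 + vector pipeline QMD
// actually provides; scoring and types here are assumptions.

type Doc = { id: string; text: string; embedding: number[] };

// Stage-1 score: count of query terms present in the document
// (a crude proxy for BM25).
function keywordScore(query: string, doc: Doc): number {
  const terms = query.toLowerCase().split(/\s+/);
  const words = doc.text.toLowerCase().split(/\s+/);
  return terms.filter(t => words.includes(t)).length;
}

// Stage-2 score: cosine similarity between embeddings.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (na * nb);
}

function hybridSearch(
  query: string,
  queryVec: number[],
  docs: Doc[],
  k = 2
): Doc[] {
  // Stage 1: keep lexically matching candidates, best-first.
  const candidates = docs
    .filter(d => keywordScore(query, d) > 0)
    .sort((a, b) => keywordScore(query, b) - keywordScore(query, a))
    .slice(0, k * 2);
  // Stage 2: rerank the candidates by vector similarity.
  return candidates
    .sort((a, b) => cosine(queryVec, b.embedding) - cosine(queryVec, a.embedding))
    .slice(0, k);
}
```

The two-stage design keeps the expensive vector comparison off the full corpus: only documents that survive the cheap lexical filter get reranked.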

Maintenance: 13 / 25
Adoption: 5 / 25
Maturity: 18 / 25
Community: 16 / 25


Stars: 13
Forks: 6
Language: TypeScript
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0
Dependencies: 9

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/joshuaswarren/openclaw-engram"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
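The same endpoint can be called from TypeScript. The base URL comes from the curl example above; everything else here, including the helper names and the untyped response, is an assumption since the response schema is not documented on this page.

```typescript
// Minimal sketch of calling the quality API from TypeScript (Node 18+,
// which provides a global fetch). Helper names are illustrative.

const BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings";

// Build the per-repo endpoint URL.
function qualityUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Fetch the quality data; the response shape is not documented here,
// so it is returned as unknown for the caller to validate.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

console.log(qualityUrl("joshuaswarren", "openclaw-engram"));
```

Note the rate limits above: without a key, a client should stay under 100 requests/day.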