memsearch and openclaw-engram
Memsearch is a generalized memory library that inspired OpenClaw's specialized memory plugin; the two are ecosystem siblings, with the plugin implementing domain-specific memory capabilities on concepts pioneered by the broader library.
About memsearch
zilliztech/memsearch
A Markdown-first memory system, a standalone library for any AI agent. Inspired by OpenClaw.
Implements semantic search over markdown files using pluggable embedding providers (ONNX, Google, Voyage, Ollama, local), with automatic file watching and SHA-256 dedup to skip re-embedding unchanged content. Stores vectors in a local database and exposes a simple async Python API that integrates seamlessly with LLM frameworks like OpenAI, Anthropic Claude, and Ollama for agent-driven recall-think-remember loops.
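The SHA-256 dedup step described above can be sketched generically: hash each file's contents and only re-embed files whose hash has changed since the last index pass. This is a minimal illustration in plain Python; the function names and the `seen_digests` mapping are assumptions for the sketch, not memsearch's actual API.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def files_to_embed(paths, seen_digests):
    """Yield only files whose content hash is not already indexed.

    seen_digests maps path -> last-indexed SHA-256 digest (hypothetical
    bookkeeping structure). Unchanged files are skipped so their
    embeddings are never recomputed.
    """
    for path in paths:
        digest = file_digest(path)
        if seen_digests.get(str(path)) != digest:
            seen_digests[str(path)] = digest
            yield path, digest
```

Keying on the content hash rather than the modification time means a `touch` or a no-op save does not trigger re-embedding, while any real edit does.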
About openclaw-engram
joshuaswarren/openclaw-engram
Local-first memory plugin for OpenClaw AI agents. LLM-powered extraction, plain markdown storage, hybrid search via QMD. Gives agents persistent long-term memory across conversations.
Engram integrates as a native OpenClaw plugin and MCP server, supporting cloud-based extraction (OpenAI) as well as fully local LLM-powered extraction (Ollama, LM Studio) with no cloud API dependencies. Memories persist as git-friendly markdown files with YAML frontmatter and lifecycle management (fact, decision, preference, correction, entity tracking), using hybrid search (BM25 + vector reranking via QMD) to surface contextual knowledge at conversation start. The architecture uses a three-phase recall-buffer-extract pipeline triggered by conversation turns, enabling semantic memory injection across multiple agent harnesses and MCP clients (Claude Code, Codex CLI), whether on a single machine or across distributed setups.
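The BM25-then-vector stage of hybrid search can be illustrated with a minimal sketch: a lexical pass (BM25) narrows the candidate pool cheaply, then a vector pass reranks the survivors by similarity to the query embedding. This is plain Python with no QMD; the candidate tuple shape and function names are assumptions for illustration, not Engram's actual interface.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rerank(candidates, query_vec, top_k=3):
    """Rerank BM25 candidates by cosine similarity to the query vector.

    candidates: list of (memory_id, bm25_score, embedding) tuples
    (hypothetical shape) -- the lexical stage has already narrowed the
    pool; the vector stage orders whatever survived it.
    """
    scored = [(mid, cosine(vec, query_vec)) for mid, _, vec in candidates]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_k]
```

Running the cheap lexical filter first keeps the expensive embedding comparison confined to a small candidate set, which is the usual motivation for this two-stage design.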