gigabrain and bamdra-openclaw-memory
These projects are complementary: gigabrain provides persistent cross-session memory with deduplication and markdown export, while bamdra-openclaw-memory handles within-session memory management with topic awareness and token optimization. Together they enable both long-term knowledge retention and bounded short-term context.
About gigabrain
legendaryvibecoder/gigabrain
Long-term memory layer for OpenClaw agents: capture, recall, dedupe, and native markdown sync.
Implements a SQLite+FTS5-backed memory architecture with an 11-step orchestrated recall pipeline and hybrid semantic/exact deduplication; supports OpenClaw, Codex, Claude Code, and Claude Desktop through MCP tools. Maintains a structured world model (entities, beliefs, contradictions, open loops) synchronized with native markdown files and Obsidian vaults, with deterministic nightly maintenance pipelines and optional FastAPI console for memory operations.
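The capture/recall/dedupe pattern behind an FTS5-backed memory store can be sketched generically. This is a minimal illustration of the idea only, not gigabrain's actual schema, pipeline, or MCP tool surface; the `capture`/`recall` names and exact-text dedup check are hypothetical stand-ins (gigabrain also does semantic dedup, which is omitted here).

```python
import sqlite3

# Hypothetical sketch: an in-memory SQLite table using the FTS5
# extension, with exact-text deduplication on insert.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")

def capture(text: str) -> bool:
    """Store a memory unless an identical one already exists."""
    dup = conn.execute(
        "SELECT 1 FROM memories WHERE content = ?", (text,)
    ).fetchone()
    if dup:
        return False  # exact duplicate: skip
    conn.execute("INSERT INTO memories(content) VALUES (?)", (text,))
    return True

def recall(query: str) -> list[str]:
    """Full-text recall via an FTS5 MATCH query, best match first."""
    rows = conn.execute(
        "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank",
        (query,),
    ).fetchall()
    return [r[0] for r in rows]

capture("User prefers dark mode in the editor")
capture("User prefers dark mode in the editor")  # deduped, not stored twice
capture("Project uses SQLite with FTS5 for search")
```

A real recall pipeline would layer ranking, filtering, and semantic similarity on top of a base store like this.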
About bamdra-openclaw-memory
bamdra/bamdra-openclaw-memory
Give one OpenClaw session durable memory, topic-aware continuity, and bounded token growth.
Implements a three-plugin stack (memory runtime, user-bind identity layer, and vector knowledge indexing) that maintains durable conversation state across OpenClaw sessions while indexing local Markdown files for semantic recall. Uses incremental profile updates, with frontmatter as the source of truth, and tracks topic continuity to prevent token bloat and context drift. Integrates directly with OpenClaw's plugin system via `clawdhub` and supports private/shared Markdown roots for knowledge organization.
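The "frontmatter as source of truth" idea can be sketched as an incremental merge where fields parsed from a Markdown file's frontmatter override previously inferred profile values. This is a hedged illustration of the general pattern only; `parse_frontmatter` and `update_profile` are hypothetical names, and bamdra-openclaw-memory's real profile format may differ.

```python
def parse_frontmatter(markdown: str) -> dict:
    """Extract simple `key: value` pairs from a ----delimited block."""
    lines = markdown.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

def update_profile(profile: dict, markdown: str) -> dict:
    """Incrementally merge frontmatter fields; frontmatter wins."""
    merged = dict(profile)
    merged.update(parse_frontmatter(markdown))
    return merged

doc = """---
name: Alice
timezone: UTC+2
---
# Notes
Alice asked about vector indexing today.
"""
# Inferred values survive unless the frontmatter overrides them.
profile = update_profile({"name": "alice?", "language": "en"}, doc)
```

Treating frontmatter as authoritative keeps the profile stable even as inferred, lower-confidence values accumulate from conversation.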