nocturne_memory and memora
Both offer persistent memory layers for MCP agents, but they target different architectures: nocturne_memory emphasizes graph-structured, rollbackable state with visual debugging, while memora focuses on semantic embeddings and knowledge graphs. That makes them **complements** that could be layered together, depending on whether an agent needs deterministic state replay or semantic retrieval.
About nocturne_memory
Dataojitori/nocturne_memory
A lightweight, rollbackable, and visual Long-Term Memory Server for MCP Agents. Say goodbye to Vector RAG and amnesia. Empower your AI with persistent, graph-like structured memory across any model, session, or tool. Drop-in replacement for OpenClaw.
Implements a graph-based memory architecture with SQLite/PostgreSQL backends in which AI agents create, update, and roll back their own structured memories through MCP, avoiding the lossy semantic compression of vector RAG and enabling condition-triggered disclosure of hierarchical knowledge graphs with human-auditable versioning. A visual dashboard supports memory exploration, diff review, and governance, and the server integrates natively with Claude Desktop, Cursor, and other MCP-compatible frameworks as a direct OpenClaw replacement.
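The rollbackable, human-auditable versioning described above can be sketched as an append-only SQLite table where every update creates a new version and a rollback simply re-appends an older value. This is an illustrative sketch, not nocturne_memory's actual schema or API; the `VersionedMemory` class and its methods are hypothetical.

```python
import sqlite3

class VersionedMemory:
    """Hypothetical sketch of rollbackable memory: every update appends
    a new row, so any earlier version can be inspected or restored."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory ("
            " key TEXT, version INTEGER, value TEXT,"
            " PRIMARY KEY (key, version))"
        )

    def set(self, key, value):
        (latest,) = self.db.execute(
            "SELECT COALESCE(MAX(version), 0) FROM memory WHERE key = ?",
            (key,),
        ).fetchone()
        self.db.execute(
            "INSERT INTO memory VALUES (?, ?, ?)", (key, latest + 1, value)
        )
        self.db.commit()

    def get(self, key):
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?"
            " ORDER BY version DESC LIMIT 1",
            (key,),
        ).fetchone()
        return row[0] if row else None

    def rollback(self, key, version):
        # Restore an earlier version by appending it as the newest one,
        # so the full history stays auditable.
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ? AND version = ?",
            (key, version),
        ).fetchone()
        if row:
            self.set(key, row[0])

mem = VersionedMemory()
mem.set("user.goal", "ship v1")
mem.set("user.goal", "ship v2")
mem.rollback("user.goal", 1)
print(mem.get("user.goal"))  # → ship v1
```

Because rollbacks are themselves new versions rather than deletions, a reviewer can always diff any two states, which is the property the dashboard's diff review relies on.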
About memora
agentic-box/memora
Give your AI agents persistent memory — MCP server for semantic storage, knowledge graphs, and cross-session context
Implements a Model Context Protocol (MCP) server with pluggable embedding backends (OpenAI, sentence-transformers, TF-IDF) and multi-tiered storage—local SQLite, Cloudflare D1, or S3/R2 with optional encryption and compression. Features include interactive knowledge graph visualization, RAG-powered chat with streaming LLM tool calling, event notifications for inter-agent communication, and automated memory deduplication via LLM comparison. Integrates with Claude Code and Codex CLI through stdio or HTTP transports.
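The deduplication idea above can be illustrated with a similarity check before insertion. memora reportedly uses LLM comparison; the sketch below swaps in a toy bag-of-words embedding with cosine similarity (stdlib only) to show the same store-or-skip flow. All names here (`embed`, `add_memory`, the threshold) are assumptions for illustration, not memora's API.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words vector; a real backend (OpenAI,
    # sentence-transformers, TF-IDF) would sit behind this interface.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def add_memory(store, text, threshold=0.9):
    """Store the memory only if no existing entry is a near-duplicate."""
    vec = embed(text)
    if any(cosine(vec, v) >= threshold for _, v in store):
        return False  # near-duplicate: skip
    store.append((text, vec))
    return True

store = []
print(add_memory(store, "the user prefers dark mode"))  # True
print(add_memory(store, "The user prefers dark mode"))  # False (duplicate)
print(add_memory(store, "the user lives in Berlin"))    # True
```

An LLM-based comparator can catch paraphrases that pure vector similarity misses, at the cost of an extra model call per insertion; the control flow stays the same.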