nocturne_memory and memora

Both offer persistent memory layers for MCP agents, but they target different architectures: nocturne_memory emphasizes graph-structured, rollbackable state with visual debugging, while memora focuses on semantic embeddings and knowledge graphs. This makes them **complements** that could be layered together, depending on whether an agent needs deterministic state replay or semantic retrieval.

|                | nocturne_memory           | memora                    |
|----------------|---------------------------|---------------------------|
| Score          | 67 (Established)          | 54 (Established)          |
| Maintenance    | 25/25                     | 13/25                     |
| Adoption       | 10/25                     | 10/25                     |
| Maturity       | 13/25                     | 15/25                     |
| Community      | 19/25                     | 16/25                     |
| Stars          | 615                       | 322                       |
| Forks          | 79                        | 34                        |
| Downloads      |                           |                           |
| Commits (30d)  | 101                       | 0                         |
| Language       | Python                    | Python                    |
| License        | MIT                       | MIT                       |
| Package        | No package, no dependents | No package, no dependents |

About nocturne_memory

Dataojitori/nocturne_memory

A lightweight, rollbackable, and visual Long-Term Memory Server for MCP Agents. Say goodbye to Vector RAG and amnesia. Empower your AI with persistent, graph-like structured memory across any model, session, or tool. Drop-in replacement for OpenClaw.

Implements a graph-based memory architecture with SQLite/PostgreSQL backends, where AI agents can create, update, and roll back their own structured memories through MCP—eliminating vector RAG's lossy semantic compression and enabling condition-triggered disclosure of hierarchical knowledge graphs with human-auditable versioning. Includes a visual dashboard for memory exploration, diff review, and governance; integrates natively with Claude Desktop, Cursor, and other MCP-compatible frameworks as a direct OpenClaw replacement.
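The core idea—versioned, graph-like memory that an agent can roll back—can be sketched in a few lines of SQLite. This is an illustrative toy, not nocturne_memory's actual schema or API; the `GraphMemory` class, table layout, and method names are all assumptions:

```python
# Hypothetical sketch of versioned graph memory with rollback.
# Schema and class names are illustrative, not nocturne_memory's real API.
import sqlite3

class GraphMemory:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.executescript("""
            CREATE TABLE IF NOT EXISTS nodes (
                id TEXT, version INTEGER, content TEXT,
                PRIMARY KEY (id, version));
            CREATE TABLE IF NOT EXISTS edges (
                src TEXT, dst TEXT, relation TEXT, version INTEGER);
        """)
        self.version = 0  # monotonically increasing snapshot counter

    def write(self, node_id, content):
        # Writes append a new version rather than overwriting,
        # so any earlier state stays reconstructable and auditable.
        self.version += 1
        self.db.execute("INSERT INTO nodes VALUES (?, ?, ?)",
                        (node_id, self.version, content))

    def link(self, src, dst, relation):
        self.version += 1
        self.db.execute("INSERT INTO edges VALUES (?, ?, ?, ?)",
                        (src, dst, relation, self.version))

    def read(self, node_id, at_version=None):
        # Latest content at or before a given version ("time travel").
        v = self.version if at_version is None else at_version
        row = self.db.execute(
            "SELECT content FROM nodes WHERE id=? AND version<=? "
            "ORDER BY version DESC LIMIT 1", (node_id, v)).fetchone()
        return row[0] if row else None

    def rollback(self, to_version):
        # Discard every write newer than the checkpoint.
        self.db.execute("DELETE FROM nodes WHERE version>?", (to_version,))
        self.db.execute("DELETE FROM edges WHERE version>?", (to_version,))
        self.version = to_version

mem = GraphMemory()
mem.write("user", "prefers dark mode")
checkpoint = mem.version
mem.write("user", "prefers light mode")  # later, mistaken update
mem.rollback(checkpoint)                 # deterministic undo
print(mem.read("user"))                  # -> prefers dark mode
```

Because every mutation carries a version number, "rollback" is a simple delete of newer rows—which is also what makes each change reviewable as a diff between two versions.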

About memora

agentic-box/memora

Give your AI agents persistent memory — MCP server for semantic storage, knowledge graphs, and cross-session context

Implements a Model Context Protocol (MCP) server with pluggable embedding backends (OpenAI, sentence-transformers, TF-IDF) and multi-tiered storage—local SQLite, Cloudflare D1, or S3/R2 with optional encryption and compression. Features include interactive knowledge graph visualization, RAG-powered chat with streaming LLM tool calling, event notifications for inter-agent communication, and automated memory deduplication via LLM comparison. Integrates with Claude Code and Codex CLI through stdio or HTTP transports.
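In contrast to versioned state, memora's retrieval model is similarity-based: memories are embedded, and queries return the closest matches. The sketch below uses a trivial bag-of-words vector as a stand-in for the pluggable backends (OpenAI, sentence-transformers, TF-IDF); the `SemanticStore` class and all names are hypothetical, not memora's actual API:

```python
# Illustrative semantic retrieval: embed memories, rank by cosine similarity.
# A bag-of-words "embedding" stands in for a real dense-vector backend.
import math
from collections import Counter

def embed(text):
    # Real backends return dense float vectors; word counts suffice to
    # demonstrate the retrieval loop.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticStore:
    def __init__(self):
        self.memories = []  # (text, vector) pairs

    def add(self, text):
        self.memories.append((text, embed(text)))

    def search(self, query, k=1):
        # Rank all stored memories by similarity to the query vector.
        qv = embed(query)
        ranked = sorted(self.memories,
                        key=lambda m: cosine(qv, m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = SemanticStore()
store.add("the user deploys services with docker compose")
store.add("the user's favorite editor is neovim")
print(store.search("which editor does the user like?"))
```

The trade-off against the graph approach above is exactly the "lossy compression" nocturne_memory criticizes: retrieval is fuzzy and approximate, but it works for queries that never match stored text exactly.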

Scores updated daily from GitHub, PyPI, and npm data.