brain-mcp and memora

Both projects implement Model Context Protocol (MCP) servers that give AI agents persistent memory: memora centers on semantic storage, knowledge graphs, and cross-session context with pluggable storage backends, while brain-mcp centers on recalling cognitive context and conversation history across multiple AI tools.

|                | brain-mcp        | memora                    |
|----------------|------------------|---------------------------|
| Score          | 61 (Established) | 54 (Established)          |
| Maintenance    | 13/25            | 13/25                     |
| Adoption       | 14/25            | 10/25                     |
| Maturity       | 18/25            | 15/25                     |
| Community      | 16/25            | 16/25                     |
| Stars          | 25               | 322                       |
| Forks          | 6                | 34                        |
| Downloads      | 818              |                           |
| Commits (30d)  | 0                | 0                         |
| Language       | Python           | Python                    |
| License        | MIT              | MIT                       |
| Risk flags     | None             | No Package, No Dependents |

About brain-mcp

mordechaipotash/brain-mcp

Your AI has amnesia. Persistent memory and cognitive context for AI. 25 MCP tools. 12ms recall.

Implements a progressive capability model—basic keyword search on raw conversations, semantic search with embeddings, and full domain reconstruction with AI-generated summaries—enabling AI assistants to surface cognitive patterns, unfinished threads, and evolved thinking across fragmented conversation histories from multiple tools (Claude, ChatGPT, Cursor). Operates as an MCP server exposing 25 specialized tools including semantic and keyword search, "prosthetic" functions like `tunnel_state` and `context_recovery` for domain re-entry, and analytics for identifying dormant contexts and thinking trajectories without requiring manual tagging.
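The first tier of that capability model is plain keyword search over raw conversation snippets. A minimal sketch of the idea, assuming a toy in-memory store (the class and method names here are illustrative, not brain-mcp's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class ConversationStore:
    """Toy model of tier one: keyword search over raw conversation
    snippets. Illustrative only; not brain-mcp's implementation."""
    snippets: list[str] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.snippets.append(text)

    def keyword_search(self, query: str) -> list[str]:
        # Rank snippets by how many query terms they contain,
        # dropping snippets that match no terms at all.
        terms = query.lower().split()
        scored = [
            (sum(t in s.lower() for t in terms), s)
            for s in self.snippets
        ]
        return [s for score, s in sorted(scored, reverse=True) if score > 0]

store = ConversationStore()
store.add("Discussed MCP tool schemas with Claude")
store.add("Refactored the embedding pipeline in Cursor")
hits = store.keyword_search("embedding pipeline")
```

The later tiers replace this term-counting with embedding similarity and AI-generated domain summaries, which is what lets tools like `tunnel_state` and `context_recovery` resurface whole working contexts rather than isolated matches.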

About memora

agentic-box/memora

Give your AI agents persistent memory — MCP server for semantic storage, knowledge graphs, and cross-session context

Implements a Model Context Protocol (MCP) server with pluggable embedding backends (OpenAI, sentence-transformers, TF-IDF) and multi-tiered storage—local SQLite, Cloudflare D1, or S3/R2 with optional encryption and compression. Features include interactive knowledge graph visualization, RAG-powered chat with streaming LLM tool calling, event notifications for inter-agent communication, and automated memory deduplication via LLM comparison. Integrates with Claude Code and Codex CLI through stdio or HTTP transports.
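The TF-IDF backend is the dependency-free fallback among the embedding options: memories and queries become sparse term-weight vectors ranked by cosine similarity. A minimal sketch of that technique in pure Python (this is an illustration of TF-IDF retrieval generally, not memora's actual code):

```python
import math
from collections import Counter

def build_index(docs):
    """Build a TF-IDF search function over a list of memory strings."""
    tokenized = [d.lower().split() for d in docs]

    # Document frequency of each term across the corpus.
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    # Smoothed IDF (sklearn-style +1 so no weight collapses to zero).
    idf = {t: math.log((1 + n) / (1 + c)) + 1.0 for t, c in df.items()}
    unseen_idf = math.log(1 + n) + 1.0  # weight for out-of-corpus terms

    def vectorize(text):
        toks = text.lower().split()
        tf = Counter(toks)
        return {t: (c / len(toks)) * idf.get(t, unseen_idf)
                for t, c in tf.items()}

    vecs = [vectorize(d) for d in docs]

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def search(query):
        # Return (score, memory) pairs, best match first.
        q = vectorize(query)
        return sorted(((cosine(q, v), d) for v, d in zip(vecs, docs)),
                      reverse=True)

    return search

memories = [
    "user prefers dark mode in the editor",
    "project uses cloudflare d1 for storage",
    "agent discussed knowledge graph visualization",
]
search = build_index(memories)
best_score, best_doc = search("which storage backend does the project use")[0]
```

Swapping this scorer for OpenAI or sentence-transformers embeddings changes the vectors but not the retrieval shape, which is presumably why the backends are pluggable.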

Scores updated daily from GitHub, PyPI, and npm data.