brain-mcp vs MARM-Systems

| Metric | brain-mcp | MARM-Systems |
|---|---|---|
| Overall score | 61 (Established) | 48 (Emerging) |
| Maintenance | 13/25 | 10/25 |
| Adoption | 14/25 | 10/25 |
| Maturity | 18/25 | 9/25 |
| Community | 16/25 | 19/25 |
| Stars | 25 | 251 |
| Forks | 6 | 42 |
| Downloads | 818 | n/a |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | MIT | MIT |
| Flags | No risk flags | No Package, No Dependents |

About brain-mcp

mordechaipotash/brain-mcp

Your AI has amnesia. Persistent memory and cognitive context for AI. 25 MCP tools. 12ms recall.

Implements a progressive capability model—basic keyword search on raw conversations, semantic search with embeddings, and full domain reconstruction with AI-generated summaries—enabling AI assistants to surface cognitive patterns, unfinished threads, and evolved thinking across fragmented conversation histories from multiple tools (Claude, ChatGPT, Cursor). Operates as an MCP server exposing 25 specialized tools including semantic and keyword search, "prosthetic" functions like `tunnel_state` and `context_recovery` for domain re-entry, and analytics for identifying dormant contexts and thinking trajectories without requiring manual tagging.
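The first tier of that progressive model, plain keyword search over raw conversation logs, can be sketched in a few lines. The function name and record shape below are hypothetical illustrations, not brain-mcp's actual API:

```python
# Minimal sketch of tier one: case-insensitive keyword search over raw
# conversation records. `keyword_search` and the record fields are
# hypothetical, not brain-mcp's real tool interface.
def keyword_search(conversations, query):
    """Return records whose text contains every query term."""
    terms = query.lower().split()
    return [
        c for c in conversations
        if all(t in c["text"].lower() for t in terms)
    ]

history = [
    {"source": "claude", "text": "Refactored the embedding pipeline."},
    {"source": "cursor", "text": "TODO: finish the recall benchmark."},
]

hits = keyword_search(history, "recall benchmark")
print(hits[0]["source"])  # cursor
```

The later tiers replace this substring matching with embedding similarity and AI-generated domain summaries, but the call shape (query in, ranked records out) stays the same.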

About MARM-Systems

Lyellr88/MARM-Systems

Turn AI into a persistent, memory-powered collaborator. Universal MCP Server (supports HTTP, STDIO, and WebSocket) enabling cross-platform AI memory, multi-agent coordination, and context sharing. Built with MARM protocol for structured reasoning that evolves with your work.

Technical Summary

Implements semantic vector-based memory indexing with auto-classification of conversation content (code, decisions, configs) and enables cross-session recall via FastAPI-backed HTTP/STDIO transports that integrate natively with Claude, Gemini, and other MCP-compatible agents. The architecture uses SQLite with WAL mode for persistent storage and connection pooling, exposing 18 MCP tools for granular memory control—including structured session logs, reusable notebooks, and smart context fallbacks when vector similarity alone is insufficient. Designed for production workflows requiring reliable long-term context across multiple AI agents and deployment cycles, with Docker containerization and rate-limiting built-in.
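The storage pattern named here, SQLite in WAL (write-ahead log) mode, lets concurrent readers proceed alongside a single writer, which suits long-lived memory servers. A minimal sketch using Python's standard `sqlite3` module, with an illustrative schema that is not MARM-Systems' actual code:

```python
import sqlite3

# Hedged sketch of SQLite persistence in WAL mode. The table name and
# columns are illustrative assumptions, not MARM-Systems' real schema.
conn = sqlite3.connect("memory.db")
conn.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer

conn.execute(
    """CREATE TABLE IF NOT EXISTS memories (
           id      INTEGER PRIMARY KEY,
           kind    TEXT,    -- e.g. 'code', 'decision', 'config'
           content TEXT
       )"""
)
conn.execute(
    "INSERT INTO memories (kind, content) VALUES (?, ?)",
    ("decision", "Adopted WAL mode for persistent storage."),
)
conn.commit()

row = conn.execute(
    "SELECT kind FROM memories ORDER BY id LIMIT 1"
).fetchone()
print(row[0])  # decision
conn.close()
```

WAL mode is a per-database setting that persists across connections, which is why it pairs naturally with the connection pooling the summary mentions: pooled connections all see the same journal mode.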

Scores updated daily from GitHub, PyPI, and npm data.