enhanced-mcp-memory and MARM-Systems

| | enhanced-mcp-memory | MARM-Systems |
| --- | --- | --- |
| Score | 55 (Established) | 54 (Established) |
| Maintenance | 2/25 | 10/25 |
| Adoption | 12/25 | 10/25 |
| Maturity | 24/25 | 15/25 |
| Community | 17/25 | 19/25 |
| Stars | 30 | 251 |
| Forks | 8 | 42 |
| Downloads | 208 | — |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | MIT | MIT |
| Flags | Stale 6m | No Package, No Dependents |

About enhanced-mcp-memory

cbunting99/enhanced-mcp-memory

An enhanced MCP (Model Context Protocol) server for intelligent memory and task management, designed for AI assistants and development workflows. Features semantic search, automatic task extraction, knowledge graphs, and comprehensive project management.

Implements a 5-stage sequential thinking engine with token optimization (30-70% compression) and automatic context summarization for conversation continuity, built on FastMCP with SQLite persistence. Automatically detects project conventions (OS, package managers, build tools, runtime types) and learns command patterns to correct AI suggestions—particularly useful for cross-platform development. Exposes 20+ MCP tools including thinking chains, task decomposition, and knowledge graph relationships that link memories to code file paths.
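The SQLite persistence pattern described above can be sketched roughly as follows. This is an illustrative mock-up only: the `MemoryStore` class, its schema, and its method names are assumptions for the sake of example, not enhanced-mcp-memory's actual API.

```python
import sqlite3

# Hypothetical sketch of an SQLite-backed memory store; the schema and
# method names are illustrative, not the project's real interface.
class MemoryStore:
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memories ("
            "id INTEGER PRIMARY KEY, content TEXT, file_path TEXT)"
        )

    def remember(self, content, file_path=None):
        # Optionally link a memory to a code file path, mirroring the
        # knowledge-graph relationships the summary mentions.
        cur = self.db.execute(
            "INSERT INTO memories (content, file_path) VALUES (?, ?)",
            (content, file_path),
        )
        self.db.commit()
        return cur.lastrowid

    def recall(self, keyword):
        # Plain keyword matching stands in here for the project's
        # semantic search.
        return self.db.execute(
            "SELECT content, file_path FROM memories WHERE content LIKE ?",
            (f"%{keyword}%",),
        ).fetchall()

store = MemoryStore()
store.remember("Use pnpm, not npm, in this repo", "package.json")
hits = store.recall("pnpm")
```

A learned convention like the package-manager preference above is exactly the kind of command pattern the server could replay later to correct an AI suggestion.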

About MARM-Systems

Lyellr88/MARM-Systems

Turn AI into a persistent, memory-powered collaborator. A universal MCP server (supporting HTTP, STDIO, and WebSocket) enabling cross-platform AI memory, multi-agent coordination, and context sharing. Built on the MARM protocol for structured reasoning that evolves with your work.

Implements semantic vector-based memory indexing with auto-classification of conversation content (code, decisions, configs) and enables cross-session recall via FastAPI-backed HTTP/STDIO transports that integrate natively with Claude, Gemini, and other MCP-compatible agents. The architecture uses SQLite with WAL mode for persistent storage and connection pooling, exposing 18 MCP tools for granular memory control, including structured session logs, reusable notebooks, and smart context fallbacks when vector similarity alone is insufficient. Designed for production workflows requiring reliable long-term context across multiple AI agents and deployment cycles, with Docker containerization and rate-limiting built-in.
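The WAL-mode setup the summary describes can be demonstrated with Python's standard `sqlite3` module. A minimal sketch, assuming a file-backed database (WAL requires one; the file name and table are illustrative, not MARM's actual schema):

```python
import os
import sqlite3
import tempfile

# Sketch of SQLite WAL-mode persistence; the path and schema are
# assumptions for illustration, not MARM-Systems' real storage layer.
path = os.path.join(tempfile.mkdtemp(), "marm_demo.db")

writer = sqlite3.connect(path)
mode = writer.execute("PRAGMA journal_mode=WAL").fetchone()[0]

writer.execute("CREATE TABLE sessions (log TEXT)")
writer.execute("INSERT INTO sessions VALUES ('entry 1')")
writer.commit()

# WAL lets a second connection (e.g. another agent or a pooled
# connection) read concurrently without blocking the writer.
reader = sqlite3.connect(path)
rows = reader.execute("SELECT log FROM sessions").fetchall()
```

The appeal of WAL here is that readers and the writer do not block each other, which matters when several agents share one memory database across long-running sessions.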

Scores updated daily from GitHub, PyPI, and npm data.