codebase-memory-mcp and deep-code-reasoning-mcp

These are complementary tools: codebase-memory-mcp provides efficient code indexing and retrieval infrastructure, while deep-code-reasoning-mcp layers AI-powered semantic analysis on top. Used together, the indexed code context from one can feed and enhance the other's reasoning.

                  codebase-memory-mcp    deep-code-reasoning-mcp
Score             65 (Established)       52 (Established)
Maintenance       25/25                  13/25
Adoption          10/25                  9/25
Maturity          11/25                  15/25
Community         19/25                  15/25
Stars             585                    102
Forks             65                     15
Commits (30d)     317                    0
Language          C                      TypeScript
License           MIT                    MIT
Package           none published         none published
Dependents        none                   none

About codebase-memory-mcp

DeusData/codebase-memory-mcp

MCP server that indexes your codebase into a persistent knowledge graph. 64 languages, sub-ms queries, 99% fewer tokens than grep. Single Go binary, no Docker, no API keys.

Builds an AST-based knowledge graph using tree-sitter parsers, with optional LSP-style type resolution for Go, C, and C++, and persists the graph to in-memory SQLite for sub-millisecond structural queries. Indexes codebases at high speed through a RAM-first pipeline with LZ4 compression and fused Aho-Corasick pattern matching, indexing the Linux kernel in 3 minutes. Implements the Model Context Protocol with 14 tools, including architecture analysis, call graph tracing, impact mapping from git diffs, and Cypher-like graph queries, and integrates with 10 coding agents (Claude Code, Zed, Gemini CLI, and others) through automatic MCP configuration on install.
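Like other MCP servers, it registers with a coding agent through the client's server configuration. The server normally writes this automatically on install; the sketch below is only illustrative, and the command name and entry key are assumptions, not the project's documented configuration:

```json
{
  "mcpServers": {
    "codebase-memory": {
      "command": "codebase-memory-mcp",
      "args": []
    }
  }
}
```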

About deep-code-reasoning-mcp

haasonsaas/deep-code-reasoning-mcp

A Model Context Protocol (MCP) server that provides advanced code analysis and reasoning capabilities powered by Google's Gemini AI.

Implements a multi-model escalation strategy in which Claude Code handles local refactoring while Gemini's 1M-token context window analyzes large-scale distributed-system failures, logs, and traces that exceed Claude's capacity. Features AI-to-AI conversational analysis tools for iterative problem-solving between models, plus specialized tools for execution tracing, cross-system impact modeling, and performance bottleneck detection. Integrates with Claude Desktop over MCP and with Google's Gemini 2.5 Pro API for complementary code-reasoning workflows.
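The core of an escalation strategy like this is a routing decision: keep work local when the context fits the local model's window, and escalate to the large-context model when it does not. A minimal sketch, assuming hypothetical names (`estimateTokens`, `routeAnalysis`) and a rough 4-characters-per-token heuristic rather than the project's actual logic:

```typescript
type Model = "claude" | "gemini";

// Rough token estimate: ~4 characters per token for English text and code.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Escalate to the 1M-token model only when the context would overflow
// the local model's window (Claude's ~200K tokens assumed here).
function routeAnalysis(context: string, claudeWindow = 200_000): Model {
  return estimateTokens(context) > claudeWindow ? "gemini" : "claude";
}
```

A small snippet stays with Claude, while a megabyte-scale trace dump routes to Gemini; the real server layers conversational back-and-forth between the models on top of this basic split.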

Scores updated daily from GitHub, PyPI, and npm data.