deep-code-reasoning-mcp and CodeMCP

deep-code-reasoning-mcp
Overall score: 52 (Established)
Maintenance: 13/25
Adoption: 9/25
Maturity: 15/25
Community: 15/25
Stars: 102
Forks: 15
Downloads:
Commits (30d): 0
Language: TypeScript
License: MIT
No Package · No Dependents

CodeMCP
Overall score: 48 (Emerging)
Maintenance: 13/25
Adoption: 9/25
Maturity: 13/25
Community: 13/25
Stars: 71
Forks: 9
Downloads:
Commits (30d): 0
Language: Go
License:
No Package · No Dependents

About deep-code-reasoning-mcp

haasonsaas/deep-code-reasoning-mcp

A Model Context Protocol (MCP) server that provides advanced code analysis and reasoning capabilities powered by Google's Gemini AI

Implements an intelligent multi-model escalation strategy where Claude Code handles local refactoring while Gemini's 1M-token context window analyzes large-scale distributed system failures, logs, and traces that exceed Claude's capacity. Features AI-to-AI conversational analysis tools enabling iterative problem-solving between models, plus specialized tools for execution tracing, cross-system impact modeling, and performance bottleneck detection. Integrates directly with Claude Desktop via MCP protocol and Google's Gemini 2.5 Pro API for complementary code reasoning workflows.
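MCP servers like this one are typically registered with Claude Desktop through its claude_desktop_config.json file. A minimal sketch of such an entry follows; the server name, install path, and the GEMINI_API_KEY variable are illustrative assumptions based on the description above, not taken from the project's documentation:

```json
{
  "mcpServers": {
    "deep-code-reasoning": {
      "command": "node",
      "args": ["/path/to/deep-code-reasoning-mcp/dist/index.js"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key"
      }
    }
  }
}
```

With an entry of this shape in place, Claude Desktop launches the server process on startup and exposes its analysis tools over the MCP protocol.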

About CodeMCP

SimplyLiz/CodeMCP

Code intelligence for AI assistants - MCP server, CLI, and HTTP API with symbol navigation, impact analysis, and architecture mapping

Leverages SCIP-based semantic indexing to build cross-file call graphs and dependency analysis, currently supporting Go (Tier 1), TypeScript/JavaScript/Python (Tier 2), and other languages with varying feature completeness. The tool operates through three interfaces: MCP (Model Context Protocol) for integration with Claude and other AI assistants, a CLI for direct terminal queries, and an HTTP API for CI/CD pipelines and custom tooling. Features include semantic call-graph navigation, blast-radius calculation with risk scoring, dead-code detection, ownership tracking via CODEOWNERS analysis, and automated secret scanning with 26 built-in patterns, all designed to reduce token usage by a claimed 83% through smart preset loading.
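The blast-radius idea mentioned above amounts to a transitive search over the reverse call graph: every caller that can reach a changed symbol is potentially affected. A minimal sketch of that technique in Go, assuming a prebuilt reverse call graph (this illustrates the concept only, not CodeMCP's actual implementation or API):

```go
package main

import "fmt"

// blastRadius returns every symbol transitively reachable from `changed`
// in the reverse call graph, i.e. all callers that could be affected by
// a change to it. Plain breadth-first search over an adjacency map.
func blastRadius(callers map[string][]string, changed string) []string {
	seen := map[string]bool{changed: true}
	queue := []string{changed}
	var affected []string
	for len(queue) > 0 {
		sym := queue[0]
		queue = queue[1:]
		for _, caller := range callers[sym] {
			if !seen[caller] {
				seen[caller] = true
				affected = append(affected, caller)
				queue = append(queue, caller)
			}
		}
	}
	return affected
}

func main() {
	// callers[f] lists the functions that call f (reverse call graph).
	// The symbol names here are made up for illustration.
	callers := map[string][]string{
		"parseConfig": {"loadApp", "runMigration"},
		"loadApp":     {"main"},
	}
	fmt.Println(blastRadius(callers, "parseConfig")) // prints [loadApp runMigration main]
}
```

A real implementation would weight each affected symbol (test coverage, ownership, churn) to produce the risk score the description mentions.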

Scores updated daily from GitHub, PyPI, and npm data.