# octocode-mcp and Axon.MCP.Server

These are **competitors** — both provide semantic code search and indexing capabilities to enable AI assistants to understand and query codebases, with octocode-mcp offering broader repository access while Axon.MCP.Server focuses on integration with specific IDEs like Cursor.

| | octocode-mcp | Axon.MCP.Server |
| --- | --- | --- |
| Overall score | 73 (Verified) | 49 (Emerging) |
| Maintenance | 23/25 | 10/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 24/25 | 13/25 |
| Community | 16/25 | 16/25 |
| Stars | 746 | 158 |
| Forks | 58 | 22 |
| Downloads | — | — |
| Commits (30d) | 28 | 0 |
| Language | TypeScript | Python |
| License | MIT | — |
| Package | — | none |
| Dependents | none | none |

## About octocode-mcp

bgauryy/octocode-mcp

MCP server for semantic code research and real-time context generation using LLM patterns | Search naturally across public and private repos based on your permissions | Transform any accessible codebase(s) into AI-optimized knowledge for simple and complex flows | Find real implementations and live docs from anywhere

Implements MCP (Model Context Protocol) with LSP-powered code intelligence (Go to Definition, Find References, Call Hierarchy) across GitHub, GitLab, and local codebases, enabling compiler-level understanding without parsing. Provides modular Agent Skills—including multi-phase research with session persistence, AST-driven code analysis, dependency graphing, and PR review across seven domains—composable via CLI or direct integration into Claude/Cursor.
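Because octocode-mcp is an MCP server, an assistant connects to it through its MCP client configuration. A minimal sketch of a Claude Desktop-style `mcpServers` entry is shown below; the exact command, package name, and any required environment variables (e.g. a GitHub token) are assumptions here and should be checked against the project's README:

```json
{
  "mcpServers": {
    "octocode": {
      "command": "npx",
      "args": ["octocode-mcp"]
    }
  }
}
```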

## About Axon.MCP.Server

ali-kamali/Axon.MCP.Server

Transform your codebase into an intelligent knowledge base for AI-powered development with Cursor IDE, Google AntiGravity, and MCP-enabled assistants

Provides 12 MCP-exposed tools (semantic search, call graphs, inheritance trees, API discovery) powered by a hybrid Python/C# analysis engine combining Tree-sitter for multi-language parsing and Roslyn for compiler-grade C# semantic analysis. Built on a FastAPI + Celery architecture with PostgreSQL + pgvector for vector embeddings, achieving <500 ms p95 latency across 10,000+ files while supporting Entity Framework introspection and auto-generated architecture visualization.
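The pgvector-backed semantic search described above boils down to nearest-neighbor ranking over embedding vectors. The toy Python sketch below illustrates the cosine-distance ranking that pgvector's `<=>` operator performs in SQL; the file paths and 3-dimensional embeddings are fabricated stand-ins for real model output, not anything from Axon.MCP.Server itself:

```python
import math

def cosine_distance(a, b):
    """Cosine distance, as computed by pgvector's <=> operator: 1 - cos(a, b)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

def semantic_search(query_vec, indexed_chunks, top_k=2):
    """Return the paths of the top_k indexed code chunks closest to the query embedding."""
    ranked = sorted(indexed_chunks,
                    key=lambda c: cosine_distance(query_vec, c["embedding"]))
    return [c["path"] for c in ranked[:top_k]]

# Fabricated embeddings; a real deployment would store model-generated vectors
# in a PostgreSQL vector column and let pgvector do this ordering server-side.
chunks = [
    {"path": "src/auth/login.cs",  "embedding": [0.9, 0.1, 0.0]},
    {"path": "src/billing/tax.cs", "embedding": [0.0, 0.2, 0.9]},
    {"path": "src/auth/token.cs",  "embedding": [0.8, 0.3, 0.1]},
]
print(semantic_search([1.0, 0.2, 0.0], chunks))
# → ['src/auth/login.cs', 'src/auth/token.cs']
```

In production the equivalent query would be `ORDER BY embedding <=> $1 LIMIT k` with an IVFFlat or HNSW index, which is what makes sub-500 ms retrieval over tens of thousands of files plausible.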

Scores updated daily from GitHub, PyPI, and npm data. How scores work