smart-coding-mcp and codealive-mcp
These are **competitors** — both provide semantic code search and context enrichment for AI assistants working with codebases, with the key difference being that smart-coding-mcp uses local AI models while codealive-mcp relies on a GraphRAG service backend.
About smart-coding-mcp
omar-haris/smart-coding-mcp
An extensible Model Context Protocol (MCP-Local-MRL-RAG-AST) server that provides intelligent semantic code search for AI assistants. It is built with local AI models and inspired by Cursor's semantic search.
Combines Matryoshka Representation Learning (MRL) embeddings with progressive SQLite caching to enable flexible semantic code search across multiple embedding dimensions (64 to 768) without restarting the server. Integrates directly with MCP-compatible AI assistants including Claude Desktop, Cursor, and VS Code, while also providing real-time package version lookups across 20+ ecosystems and workspace-switching capabilities for monorepo navigation.
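The MRL idea above — one embedding usable at several dimensions — can be sketched in a few lines. This is an illustrative toy, not smart-coding-mcp's actual code: an MRL-trained vector can be truncated to its first N components and re-normalized, giving a cheap coarse pass before a full-dimension refinement.

```python
import math

def truncate_embedding(vec, dim):
    """Matryoshka-style truncation: keep the first `dim` components
    and re-normalize to unit length so cosine similarity stays valid."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head)) or 1.0
    return [x / norm for x in head]

def cosine(a, b):
    # Dot product suffices because both vectors are unit-normalized.
    return sum(x * y for x, y in zip(a, b))

# Toy 8-d vectors standing in for 768-d MRL embeddings.
doc = truncate_embedding([0.9, 0.1, 0.4, -0.2, 0.05, 0.3, -0.1, 0.2], 8)
query = truncate_embedding([0.8, 0.2, 0.5, -0.1, 0.0, 0.25, -0.2, 0.1], 8)

# Coarse pass at a small dimension, refinement at full dimension.
coarse = cosine(truncate_embedding(doc, 2), truncate_embedding(query, 2))
fine = cosine(doc, fine_q := query)
```

In a real index, the coarse score filters candidates cheaply and the full-dimension score re-ranks the survivors, which is what makes switching dimensions without re-indexing attractive.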
About codealive-mcp
CodeAlive-AI/codealive-mcp
The most accurate and comprehensive Context Engine as a service, optimized for large codebases, powered by advanced GraphRAG, and accessible via MCP. It enriches the context for AI agents such as Codex, Claude Code, and Cursor, making them 35% more efficient and up to 84% faster.
Exposes three core tools via MCP protocol—`get_data_sources`, `codebase_search` (semantic search across indexed codebases), and `codebase_consultant` (AI-powered project analysis)—enabling agents to retrieve relevant code context beyond single files. Supports both remote HTTP and Docker stdio transports, with optional Agent Skill installation to teach clients optimal query patterns for the platform. Integrates with 30+ AI coding assistants including Claude, Cursor, Gemini CLI, Continue, and GitHub Copilot through standardized MCP configurations.
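Wiring either server into a client follows the standard MCP configuration shape. The entry below is a hedged illustration only — the server name, command, and environment variable are assumptions, not values documented by the codealive-mcp project:

```json
{
  "mcpServers": {
    "codealive": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "codealive/codealive-mcp"],
      "env": {
        "CODEALIVE_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Once registered, the client discovers the server's tools (`get_data_sources`, `codebase_search`, `codebase_consultant`) over the stdio transport and can invoke them during agent runs.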
Scores updated daily from GitHub, PyPI, and npm data.