mcp-local-rag and mcp-rag-server
About mcp-local-rag
shinpr/mcp-local-rag
Local-first RAG server for developers using MCP. Semantic + keyword search for code and technical docs. Fully private, zero setup.
Embeds documents locally using Ollama-compatible models and stores the vectors in LanceDB for fast hybrid semantic and keyword search. It integrates with Cursor, Codex, and Claude Code via the MCP protocol, and also offers a CLI for standalone indexing and querying without an MCP client.
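Hybrid search like the one described above typically runs a semantic (vector) query and a keyword query separately, then merges the two result lists. A common merging strategy is reciprocal rank fusion (RRF); the sketch below is a generic illustration of that idea, not mcp-local-rag's actual implementation, and the `k` constant is the conventional default rather than anything the project documents.

```python
def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document IDs with reciprocal rank fusion.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked highly by multiple retrievers
    (e.g. both the vector index and the keyword index) rise to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Sort document IDs by fused score, best first.
    return sorted(scores, key=scores.get, reverse=True)


# Hypothetical example: "b" is ranked well by both retrievers, so it wins.
semantic_hits = ["a", "b", "c"]   # from the vector index
keyword_hits = ["b", "c", "a"]    # from the keyword index
fused = rrf([semantic_hits, keyword_hits])
```

RRF is popular for hybrid search because it needs no score normalization: it only uses ranks, so cosine similarities and BM25 scores never have to be put on a common scale.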
About mcp-rag-server
kwanLeeFrmVi/mcp-rag-server
mcp-rag-server is a Model Context Protocol (MCP) server that enables Retrieval Augmented Generation (RAG) capabilities. It empowers Large Language Models (LLMs) to answer questions based on your document content by indexing and retrieving relevant information efficiently.
It supports multiple embedding providers (OpenAI, Ollama, Granite, Nomic) backed by a SQLite vector store, and exposes indexing and retrieval operations as MCP tools and resources over stdio. Documents in five formats (.txt, .md, .json, .jsonl, .csv) are processed with configurable chunking, allowing integration into any MCP-compatible client or LLM application.
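"Configurable chunking" usually means splitting each document into fixed-size pieces with some overlap before embedding, so that retrieval can return passages rather than whole files. The sketch below shows the general technique; the function name and the size/overlap values are illustrative, not mcp-rag-server's actual API or defaults.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks where consecutive chunks
    share `overlap` characters, so a sentence cut at a boundary is
    still fully contained in at least one chunk."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the last window already reached the end of the text
    return chunks
```

Each chunk would then be embedded and stored as one row in the vector store, keyed back to its source document so retrieved passages can cite where they came from.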