mcp-rag-server and mcp-ragchat

                 mcp-rag-server    mcp-ragchat
Overall score    38 (Emerging)     36 (Emerging)
Maintenance      2/25              10/25
Adoption         7/25              4/25
Maturity         16/25             10/25
Community        13/25             12/25
Stars            25                1
Forks            4                 1
Downloads        n/a               14
Commits (30d)    0                 0
Language         TypeScript        TypeScript
License          MIT               n/a

Flags (mcp-rag-server): Stale 6m, No Package, No Dependents
Flags (mcp-ragchat): No License, No Dependents

About mcp-rag-server

kwanLeeFrmVi/mcp-rag-server

mcp-rag-server is a Model Context Protocol (MCP) server that enables Retrieval Augmented Generation (RAG) capabilities. It empowers Large Language Models (LLMs) to answer questions based on your document content by indexing and retrieving relevant information efficiently.

The server supports multiple embedding providers (OpenAI, Ollama, Granite, Nomic) with a SQLite-backed vector store, and exposes indexing and retrieval operations as MCP tools and resources over stdio. It processes documents in five formats (.txt, .md, .json, .jsonl, .csv) with configurable chunking, enabling integration into any MCP-compatible client or LLM application.
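To illustrate the configurable chunking step that precedes indexing, here is a minimal TypeScript sketch of fixed-size chunking with overlap. The function and parameter names (chunkText, chunkSize, chunkOverlap) are illustrative assumptions, not mcp-rag-server's actual configuration keys or API.

```typescript
// Sketch: split a document into fixed-size chunks with overlap,
// the kind of configurable chunking a RAG indexer performs before
// embedding each chunk into the vector store.
function chunkText(
  text: string,
  chunkSize: number,
  chunkOverlap: number,
): string[] {
  if (chunkSize <= 0 || chunkOverlap >= chunkSize) {
    throw new Error("chunkSize must be positive and greater than chunkOverlap");
  }
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    // Stop once the current chunk reaches the end of the text.
    if (start + chunkSize >= text.length) break;
    // Step forward, keeping chunkOverlap characters of context.
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}
```

The overlap keeps a sliver of shared context between adjacent chunks so that a sentence split across a boundary can still be retrieved from either side.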

About mcp-ragchat

gogabrielordonez/mcp-ragchat

An MCP server that adds RAG-powered AI chat to any website with one command from Claude Code. It uses a local vector store and supports multiple LLM providers (OpenAI, Anthropic, Gemini), with zero cloud dependency.
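A local vector store of this kind typically boils down to cosine-similarity ranking over stored embeddings. The TypeScript sketch below shows that core operation; the names (StoredDoc, topK) are hypothetical and do not reflect mcp-ragchat's actual internals.

```typescript
// Sketch: top-k nearest-neighbor lookup over an in-memory vector store,
// the retrieval step behind a local RAG chat backend.
interface StoredDoc {
  id: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}

// Return the ids of the k documents most similar to the query embedding.
function topK(store: StoredDoc[], query: number[], k: number): string[] {
  return store
    .map((d) => ({ id: d.id, score: cosine(d.embedding, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((d) => d.id);
}
```

The retrieved ids map back to document chunks, which are then stuffed into the LLM prompt regardless of which provider (OpenAI, Anthropic, or Gemini) answers the chat turn.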

Scores updated daily from GitHub, PyPI, and npm data.