RapidAI/RapidRAG
QA based on local knowledge and LLM.
Implements a modular, LangChain-independent architecture with pluggable components for document ingestion, retrieval, and LLM integration, supporting multiple document formats (PDF, DOCX, PPTX, Excel, etc.). Inference requirements are separated: only the LLM interface needs external deployment, while embedding and retrieval run on CPU. Includes a web UI and targets flexible knowledge-base QA systems without external framework dependencies.
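The pluggable pipeline described above can be sketched as follows. All class and method names here are illustrative, not RapidRAG's actual API; a toy bag-of-words embedder stands in for a real CPU embedding model, and `EchoLLM` stands in for the externally deployed LLM interface.

```python
# Minimal sketch of a pluggable RAG pipeline (hypothetical names,
# not RapidRAG's real API): local CPU embedding + retrieval, with
# the LLM behind a swappable interface.
import math
import re
from collections import Counter


class BagOfWordsEmbedder:
    """Toy CPU embedder: token-count vectors."""
    def embed(self, text: str) -> Counter:
        return Counter(re.findall(r"\w+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class Retriever:
    """Stores embedded chunks; returns the top-k matches for a query."""
    def __init__(self, embedder):
        self.embedder = embedder
        self.chunks = []  # (text, vector) pairs

    def add(self, text: str) -> None:
        self.chunks.append((text, self.embedder.embed(text)))

    def search(self, query: str, k: int = 1) -> list[str]:
        qv = self.embedder.embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(qv, c[1]),
                        reverse=True)
        return [text for text, _ in ranked[:k]]


class EchoLLM:
    """Stand-in for the externally deployed LLM interface."""
    def answer(self, question: str, context: list[str]) -> str:
        return f"Q: {question} | context: {context[0]}"


# Wiring: ingestion -> retrieval (local, CPU) -> LLM (remote in practice)
retriever = Retriever(BagOfWordsEmbedder())
retriever.add("RapidRAG supports PDF, DOCX, PPTX and Excel ingestion.")
retriever.add("Embedding and retrieval run on CPU.")
llm = EchoLLM()
print(llm.answer("Which formats are supported?",
                 retriever.search("supported formats PDF DOCX")))
```

Because each stage sits behind a small interface, the toy embedder or `EchoLLM` can be swapped for a real model without touching the retrieval wiring.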
Stars: 245
Forks: 46
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 16, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/RapidAI/RapidRAG"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
benitomartin/substack-newsletters-search-course
Production RAG System Course
liweiphys/layra
LAYRA—an enterprise-ready, out-of-the-box solution—unlocks next-generation intelligent systems...
LHRLAB/HyperGraphRAG
[NeurIPS 2025] Official resources of "HyperGraphRAG: Retrieval-Augmented Generation via...
limanmys/sef
On premise enterprise-grade RAG-powered agentic workflow chatbot platform with multi-provider support
Da1yuqin/EviNoteRAG
Welcome! 😊 This is the official code release of EviNote-RAG, and we’re happy to share it with...