RapidAI/RapidRAG

QA based on local knowledge and LLM.

Score: 50 / 100 (Established)

Implements a modular, Langchain-independent architecture with pluggable components for document ingestion, retrieval, and LLM integration, supporting multiple document formats (PDF, DOCX, PPTX, Excel, etc.). Separates inference requirements: only the LLM interface needs an external deployment, while embedding and retrieval run on CPU. Includes a web UI and targets flexible knowledge-base QA systems without external framework dependencies.
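The pluggable split described above (format-specific loaders, CPU-side retrieval, an externally deployed LLM) can be sketched as a set of interfaces. This is an illustrative sketch only; the class and method names (`Loader`, `Retriever`, `LLM`, `answer`) are assumptions, not RapidRAG's actual API.

```python
from dataclasses import dataclass
from typing import Protocol

# Hypothetical sketch of the pluggable architecture; names are illustrative.

@dataclass
class Chunk:
    text: str
    source: str

class Loader(Protocol):
    """Ingests one document format (PDF, DOCX, PPTX, Excel, ...)."""
    def load(self, path: str) -> list[Chunk]: ...

class Retriever(Protocol):
    """CPU-side embedding + similarity search over ingested chunks."""
    def add(self, chunks: list[Chunk]) -> None: ...
    def search(self, query: str, k: int) -> list[Chunk]: ...

class LLM(Protocol):
    """The only component that needs an external deployment."""
    def complete(self, prompt: str) -> str: ...

def answer(question: str, retriever: Retriever, llm: LLM, k: int = 3) -> str:
    """Retrieve local context, then ask the LLM to answer from it."""
    context = "\n".join(c.text for c in retriever.search(question, k))
    return llm.complete(f"Context:\n{context}\n\nQuestion: {question}")
```

Because each component is a `Protocol`, any loader, retriever, or LLM backend satisfying the interface can be swapped in without touching the pipeline.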


No package published · No dependents

Score breakdown:
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 21 / 25


Stars: 245
Forks: 46
Language: Python
License: Apache-2.0
Category: rag-applications
Last pushed: Jan 16, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/RapidAI/RapidRAG"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
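The curl call above can also be made from Python with the standard library. A minimal sketch: the endpoint path is taken directly from the curl example, but the shape of the returned JSON is not documented here, so the code only builds the URL and decodes the payload without assuming field names.

```python
import json
import urllib.request

# Base path taken from the curl example; "category/owner/repo" segments assumed
# to generalize beyond this one repository.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repository's quality card."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# The URL for this repository's card (no request is made here).
url = quality_url("rag", "RapidAI", "RapidRAG")
```

With an API key (1,000 requests/day), you would attach it per the service's instructions; how the key is passed is not specified on this page.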