JimlAspen/llm-rag-project
A modular, reproducible Retrieval‑Augmented Generation (RAG) and Grounded AI pipeline built from scratch. Includes ingestion, token‑based chunking, YAML‑driven configs, and end‑to‑end testing. Part of a multi‑week project demonstrating production‑grade LLM system design, source governance, and grounded reasoning.
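The description mentions token-based chunking as one of the pipeline stages. A minimal sketch of that idea is below; it uses whitespace tokenization as a stand-in, since the repo's actual tokenizer (likely a model-specific BPE) and its chunk/overlap sizes are not specified here.

```python
def chunk_by_tokens(text: str, max_tokens: int = 256, overlap: int = 32) -> list[str]:
    """Split text into overlapping chunks of at most max_tokens tokens.

    Whitespace tokenization is a placeholder for whatever tokenizer
    the pipeline actually uses; the sliding-window-with-overlap shape
    is the general technique, not this repo's exact implementation.
    """
    if overlap >= max_tokens:
        raise ValueError("overlap must be smaller than max_tokens")
    tokens = text.split()
    step = max_tokens - overlap  # advance by this many tokens per chunk
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break  # last window already covers the tail
    return chunks
```

Overlapping windows keep sentences that straddle a chunk boundary retrievable from at least one chunk, at the cost of some duplicated index entries.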
Stars: —
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Apr 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/JimlAspen/llm-rag-project"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
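The same endpoint can be called from Python. The sketch below only builds the per-repo URL from the pattern shown in the curl example; the response schema and the header name used to pass an API key are assumptions, not documented here.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality/rag"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL, URL-encoding each path segment."""
    return f"{BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

url = quality_url("JimlAspen", "llm-rag-project")

# To fetch (network call omitted here); the "X-API-Key" header name is a
# guess -- check the API docs for the real authentication scheme:
# import urllib.request
# req = urllib.request.Request(url, headers={"X-API-Key": "YOUR_KEY"})
# data = urllib.request.urlopen(req).read()
```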
Higher-rated alternatives
sanidavidanagama/DSGP15_Project
AI-powered child developmental assessment platform that analyzes children's drawings using...
VesperArch/GopherDoc
High-throughput RAG ingestion engine in pure Go — 1,210 MB/s, 2 MB heap, zero dependencies.
skerk001/clinical-rag
RAG system for clinical question answering over 220 discharge summaries with hallucination...
TemidireAdesiji/docmind
Document QA engine powered by agentic reasoning, hierarchical chunking, and hybrid vector...
payalcs077/rag-powered-document-qa-system
FastAPI-based RAG document Q&A system with chunked retrieval, grounded answers, CI, and Docker support.