Otman404/local-rag-llamaindex

A local LlamaIndex RAG system that helps researchers quickly navigate research papers

Score: 39 / 100 (Emerging)

Implements a complete retrieval-augmented generation pipeline using LlamaIndex for document chunking and embedding, Qdrant for vector storage, and Ollama for local LLM inference—all orchestrated via FastAPI. Automatically downloads research papers from arXiv, indexes them into the vector database, retrieves relevant chunks for user queries, and generates grounded answers with source citations. Fully containerized with Docker Compose for reproducible offline operation without API dependencies.
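The retrieve-then-generate flow described above can be illustrated with a toy, dependency-free sketch. This is not the project's code: the real pipeline uses LlamaIndex embeddings, Qdrant similarity search, and an Ollama-served LLM, whereas here `embed` is a hypothetical bag-of-words stand-in and `CHUNKS` is made-up sample data, used only to show the shape of the retrieval step.

```python
import math
from collections import Counter

# Toy passages standing in for indexed paper chunks (the real pipeline
# stores LlamaIndex-embedded chunks in Qdrant).
CHUNKS = [
    "Transformers use self-attention to weigh token interactions.",
    "Qdrant stores dense vectors and supports filtered similarity search.",
    "RAG grounds LLM answers in retrieved document chunks.",
]

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a dense embedding model:
    # a bag-of-words term-frequency vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, top_k: int = 1) -> list[str]:
    # Rank chunks by similarity to the query, as a vector store would,
    # and return the top_k most relevant ones.
    q = embed(query)
    ranked = sorted(CHUNKS, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:top_k]

# In the real pipeline, the retrieved chunks would be prepended to the
# prompt sent to the local LLM so the answer can cite its sources.
print(retrieve("how does rag ground llm answers"))
```

The actual project swaps each piece for a production counterpart: the embedding comes from a LlamaIndex embedding model, the ranking happens inside Qdrant, and generation runs on a local Ollama model behind FastAPI.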

133 stars. No commits in the last 6 months.

No License · Stale (6m) · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 19 / 25


Stars: 133
Forks: 23
Language: Python
License: none
Last pushed: May 16, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/Otman404/local-rag-llamaindex"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.