paper-qa and docrag

Both projects compete in the scientific document QA space. paper-qa offers a more mature, production-ready implementation focused on citation accuracy, while docrag appears to be an earlier-stage approach to the same retrieval-augmented question-answering problem.

|                | paper-qa      | docrag                     |
|----------------|---------------|----------------------------|
| Score          | 77 (Verified) | 24 (Experimental)          |
| Maintenance    | 20/25         | 10/25                      |
| Adoption       | 12/25         | 5/25                       |
| Maturity       | 25/25         | 9/25                       |
| Community      | 20/25         | 0/25                       |
| Stars          | 8,264         | 10                         |
| Forks          | 838           | —                          |
| Downloads      | —             | —                          |
| Commits (30d)  | 7             | 0                          |
| Language       | Python        | Python                     |
| License        | Apache-2.0    | —                          |
| Risk flags     | None          | No Package, No Dependents  |

About paper-qa

Future-House/paper-qa

High accuracy RAG for answering questions from scientific documents with citations

Implements agentic RAG with iterative query refinement and LLM-based re-ranking, automatically enriches documents with metadata (citations, journal quality) from Semantic Scholar and Crossref, and supports multiple document formats (PDFs, text, code, Office files) with full-text search via tantivy. Integrates with any LiteLLM-supported model provider and offers local embedding alternatives, enabling deployment without proprietary APIs.
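
For a sense of how this fits together in practice, the sketch below follows the Docs-based workflow shown in the project's README; the file paths are placeholders and exact signatures differ between paper-qa releases, so treat it as an approximate usage example rather than the definitive API.

```python
# Minimal paper-qa usage sketch, following the Docs-based workflow from the
# project's README. Paths are placeholders; exact method signatures can vary
# between paper-qa releases, so check the installed version's docs.
from paperqa import Docs

docs = Docs()  # holds parsed papers, their chunks, and embeddings

# Add local PDFs; paper-qa parses and chunks them, and (when metadata lookup is
# enabled) enriches each entry with citation data from Semantic Scholar/Crossref.
docs.add("papers/example_paper_1.pdf")
docs.add("papers/example_paper_2.pdf")

# Querying runs the retrieval loop: gather candidate chunks, re-rank them with
# an LLM, and compose an answer with inline citations.
answer = docs.query("What evidence supports the paper's main claim?")
print(answer)  # the returned object includes the answer text and its citations
```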

About docrag

nhevers/docrag

Document retrieval and QA pipeline
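
The repository description says little about how docrag is built, so the following is only a generic sketch of what a document retrieval and QA pipeline involves, not docrag's actual code; the toy embed() function and the final prompt assembly stand in for a real embedding model and LLM call.

```python
# Generic document-retrieval + QA pipeline sketch (illustrative only; not
# docrag's actual implementation). embed() is a toy stand-in for a neural
# encoder, and the "answer" step only assembles the prompt a real LLM would see.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real pipeline would use a sentence encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank chunks by similarity to the question and keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, chunks: list[str]) -> str:
    # In a full pipeline this prompt would be sent to an LLM to generate the answer.
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "Retrieval-augmented generation grounds LLM answers in retrieved passages.",
    "Transformers rely on self-attention to model token interactions.",
]
print(build_prompt("How does retrieval-augmented generation work?", chunks))
```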

Scores updated daily from GitHub, PyPI, and npm data.