paper-qa and docrag
Both projects target question answering over scientific documents. paper-qa is the more mature, production-ready implementation, with a focus on citation accuracy, while docrag appears to be an earlier-stage take on the same retrieval-augmented question-answering problem.
About paper-qa
Future-House/paper-qa
High accuracy RAG for answering questions from scientific documents with citations
Implements agentic RAG with iterative query refinement and LLM-based re-ranking, automatically enriches documents with metadata (citations, journal quality) from Semantic Scholar and Crossref, and supports multiple document formats (PDFs, text, code, Office files) with full-text search via tantivy. Integrates with any LiteLLM-supported model provider and offers local embedding alternatives, enabling deployment without proprietary APIs.
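For orientation, here is a minimal sketch of how a paper-qa query might look from Python. It assumes the Docs-based interface described in the project's README (Docs, add, query, and a formatted_answer attribute on the result); exact names and sync/async variants may differ between releases, and the file path is hypothetical.

```python
# Minimal sketch: answer a question over local PDFs with paper-qa.
# Assumes the Docs-based API from the project's README; names may differ by version.
from paperqa import Docs

docs = Docs()  # uses the default LiteLLM-configured model unless overridden
docs.add("example_paper.pdf")  # hypothetical local file; parsing and chunking happen here

answer = docs.query("What methods does the paper use for evaluation?")
print(answer.formatted_answer)  # answer text with inline citations to the source documents
```

The iterative retrieval, LLM-based re-ranking, and metadata enrichment described above happen inside the query call; the caller only supplies documents and a question.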
About docrag
nhevers/docrag
document retrieval and QA pipeline