paper-qa and Local_Pdf_Chat_RAG
These are competitors in the PDF-RAG space: paper-qa offers a production-ready, citation-aware system for answering questions over scientific documents, while Local_Pdf_Chat_RAG provides a lightweight, educational implementation emphasizing hybrid retrieval (FAISS + BM25), suited to learning how RAG pipelines work.
About paper-qa
Future-House/paper-qa
High accuracy RAG for answering questions from scientific documents with citations
Implements agentic RAG with iterative query refinement and LLM-based re-ranking, automatically enriches documents with metadata (citations, journal quality) from Semantic Scholar and Crossref, and supports multiple document formats (PDFs, text, code, Office files) with full-text search via tantivy. Integrates with any LiteLLM-supported model provider and offers local embedding alternatives, enabling deployment without proprietary APIs.
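The agentic loop described above (retrieve, judge the evidence, refine the query, retry) can be sketched with toy stand-ins: keyword overlap in place of embedding search and query-term coverage in place of LLM-based scoring. Every function name here is illustrative, not paper-qa's actual API.

```python
# Toy sketch of an agentic RAG loop with iterative query refinement.
# All functions are hypothetical stand-ins, not paper-qa's API.

def retrieve(query, corpus, k=2):
    """Rank chunks by keyword overlap (stand-in for embedding search)."""
    terms = set(query.lower().split())
    return sorted(corpus, key=lambda c: -len(terms & set(c.lower().split())))[:k]

def evidence_sufficient(query, chunks, threshold=0.5):
    """Stand-in for LLM relevance scoring: fraction of query terms covered."""
    terms = set(query.lower().split())
    covered = set().union(*(terms & set(c.lower().split()) for c in chunks))
    return len(covered) / max(len(terms), 1) >= threshold

def refine(query):
    """Stand-in for an LLM rewriting a weak query with synonyms."""
    synonyms = {"search": "retrieval", "dense": "vector"}
    return " ".join(synonyms.get(t, t) for t in query.lower().split())

def agentic_retrieve(query, corpus, max_rounds=3):
    chunks = []
    for _ in range(max_rounds):
        chunks = retrieve(query, corpus)
        if evidence_sufficient(query, chunks):
            break  # enough evidence; hand chunks to the answering LLM
        query = refine(query)  # otherwise rewrite the query and retry
    return chunks

corpus = [
    "FAISS performs dense vector similarity search",
    "BM25 is a sparse lexical scoring function",
    "tantivy provides full-text indexing in Rust",
]
print(agentic_retrieve("dense similarity search", corpus))
```

In the real system the retrieval, scoring, and refinement steps are each LLM- or embedding-backed; the control flow is the same.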
About Local_Pdf_Chat_RAG
weiwill88/Local_Pdf_Chat_RAG
🧠 A RAG framework implemented in pure native Python | FAISS + BM25 hybrid retrieval | Supports Ollama / SiliconFlow | Beginner-friendly for learning
Implements a complete RAG pipeline with modular components decomposing document loading, text chunking, embedding, vector storage (FAISS), and LLM generation into learnable stages. Combines dense vector retrieval with BM25 sparse retrieval, adds cross-encoder reranking and recursive retrieval for improved accuracy, and provides a Gradio interface for interactive learning. Supports pluggable LLM backends via auto-detection of local Ollama or SiliconFlow API endpoints.
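The hybrid retrieval step can be sketched as below, using a bag-of-words cosine as a stand-in for FAISS dense embeddings and fusing the dense and BM25 rankings with reciprocal rank fusion. The fusion method and all names are assumptions for illustration, not the repo's actual module layout, and the cross-encoder reranking stage is omitted.

```python
# Minimal hybrid-retrieval sketch: dense scoring (toy bag-of-words cosine
# standing in for FAISS embeddings) + BM25 sparse scoring, fused with
# reciprocal rank fusion. Names are illustrative, not the repo's API.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Okapi BM25 score of each document against the query."""
    toks = [tokenize(d) for d in docs]
    avgdl = sum(len(t) for t in toks) / len(docs)
    df = Counter(t for doc in toks for t in set(doc))
    scores = []
    for doc in toks:
        tf = Counter(doc)
        s = 0.0
        for term in tokenize(query):
            if term not in tf:
                continue
            idf = math.log(1 + (len(docs) - df[term] + 0.5) / (df[term] + 0.5))
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores

def hybrid_rank(query, docs, k=60):
    """Fuse dense and sparse rankings via reciprocal rank fusion."""
    q = Counter(tokenize(query))
    dense = [cosine(q, Counter(tokenize(d))) for d in docs]
    sparse = bm25_scores(query, docs)
    def ranks(scores):
        order = sorted(range(len(docs)), key=lambda i: -scores[i])
        return {i: r for r, i in enumerate(order)}
    dr, sr = ranks(dense), ranks(sparse)
    fused = {i: 1 / (k + dr[i]) + 1 / (k + sr[i]) for i in range(len(docs))}
    return sorted(range(len(docs)), key=lambda i: -fused[i])

docs = [
    "FAISS builds an index for dense vector search",
    "BM25 ranks documents by term frequency and rarity",
    "Gradio provides an interactive web interface",
]
print(hybrid_rank("dense vector search index", docs))
```

In the real pipeline the dense side would query a FAISS index of embedding vectors, and the fused candidates would then be reordered by a cross-encoder before being passed to the LLM.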