gc-qa-rag and lm-rag-techniques
These projects are complementary: GrapeCity's pre-generated QA pair approach and NamaWho's advanced retrieval techniques (Rank Fusion, Cascading Retrieval) address different layers of the RAG pipeline and could be combined for stronger question-answering performance.
About gc-qa-rag
GrapeCity-AI/gc-qa-rag
A RAG (Retrieval-Augmented Generation) knowledge-base solution based on advanced pre-generated QA pairs.
Leverages a two-stage, memory-focused approach to QA generation that adapts to document length: short documents are processed with sentence-level precision, while long documents use a "remember-then-focus" dialogue pattern to achieve comprehensive coverage without hallucination. Beyond the core QA pairs, it generates summaries, expanded answers, and question variants, all stored in a vector database (Qdrant) to improve retrieval diversity and multi-turn dialogue. It is built as a modular ETL-Retrieval-Generation stack with production-grade orchestration, supporting Docker deployment, hybrid search with RRF ranking, and integration with major LLM APIs (OpenAI, Alibaba Bailian, etc.).
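The hybrid search with RRF ranking mentioned above can be illustrated with a minimal sketch of Reciprocal Rank Fusion. This is a generic illustration, not gc-qa-rag's actual code; the document IDs and the two toy result lists are invented for the example, and `k=60` is the commonly used RRF constant.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge ranked lists from multiple retrievers.

    Each document's fused score is the sum over lists of 1 / (k + rank),
    so items ranked highly by several retrievers rise to the top.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hybrid search: fuse dense-vector hits with keyword (e.g. BM25) hits.
dense_hits = ["doc3", "doc1", "doc7"]      # hypothetical vector-search results
keyword_hits = ["doc1", "doc4", "doc3"]    # hypothetical keyword-search results
fused = rrf_fuse([dense_hits, keyword_hits])
# doc1 and doc3 appear in both lists, so they outrank the single-list hits.
```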
About lm-rag-techniques
NamaWho/lm-rag-techniques
A Question-Answering (QA) system powered by Retrieval-Augmented Generation (RAG). The system leverages advanced methods such as Rank Fusion and Cascading Retrieval to optimize document retrieval and contextual QA generation.
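Cascading Retrieval, as named above, generally means a cheap first-stage scorer prunes the corpus before a more expensive scorer reranks the survivors. The sketch below is an assumption-laden illustration of that pattern, not lm-rag-techniques' implementation: `term_overlap` is a stand-in for a fast lexical scorer (e.g. BM25), and in practice the rerank stage would call a cross-encoder or LLM.

```python
def term_overlap(query, doc):
    # Cheap lexical score: count of shared tokens (stand-in for BM25).
    return len(set(query.lower().split()) & set(doc.lower().split()))

def cascade_retrieve(query, corpus, first_k=20, final_k=3,
                     cheap=term_overlap, rerank=None):
    """Cascading retrieval: stage 1 prunes the corpus with a cheap scorer,
    stage 2 reranks only the survivors with a (typically costlier) scorer."""
    stage1 = sorted(corpus, key=lambda d: cheap(query, d), reverse=True)[:first_k]
    rerank = rerank or cheap  # a cross-encoder would usually go here
    return sorted(stage1, key=lambda d: rerank(query, d), reverse=True)[:final_k]

# Toy usage with an invented three-document corpus.
corpus = [
    "rank fusion combines ranked lists",
    "cats are cute",
    "retrieval augmented generation pipelines",
]
top = cascade_retrieve("rank fusion retrieval", corpus, first_k=2, final_k=1)
```

The key design point is that the expensive scorer only ever sees `first_k` candidates, so retrieval latency stays bounded regardless of corpus size.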