RAG-system and LongRAG
These projects sit in the same ecosystem: LongRAG is a specialized research advancement (long-context question answering at scale) built on the foundational retrieval-augmented generation paradigm that the basic RAG-system demonstrates.
About RAG-system
xumozhu/RAG-system
A Retrieval-Augmented Generation demo: ask a question, retrieve the relevant documents, and have an LLM generate a precise answer from them.
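The retrieve-then-generate loop described above can be sketched roughly as follows. This is an illustrative toy, not the repo's code: the corpus, the overlap-based scorer, and the `generate` stub (standing in for a real LLM call) are all assumptions.

```python
import re

def tokenize(text):
    """Lowercase and split into word tokens (toy tokenizer)."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, corpus, k=2):
    """Rank documents by token overlap with the question (toy scorer)."""
    q = tokenize(question)
    scored = sorted(corpus, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def generate(question, docs):
    """Stand-in for an LLM call: stitches retrieved context into a prompt."""
    context = "\n".join(docs)
    return f"Answer '{question}' using only this context:\n{context}"

corpus = [
    "Paris is the capital of France.",
    "The Nile is a river in Africa.",
    "France borders Spain and Germany.",
]
docs = retrieve("What is the capital of France?", corpus)
print(generate("What is the capital of France?", docs))
```

In a real system the scorer would be a dense or sparse retriever and `generate` would call an actual model; the control flow, however, stays this simple.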
About LongRAG
QingFei1/LongRAG
[EMNLP 2024] LongRAG: A Dual-perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering
Implements a dual-perspective RAG architecture with separate Extractor and Filter components that decompose long-context understanding into global information retrieval and factual detail extraction. Built on LLaMA-Factory for supervised fine-tuning, it supports modular component composition: extractors and filters can be independently swapped across different LLM generators (ChatGLM3, Llama3, GPT-3.5, GLM-4). Evaluated on multi-hop QA datasets from LongBench with context lengths up to 32k tokens, it achieves an average F1 of 52.56 with GLM-4.
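The modular composition described above can be sketched structurally. This is a hypothetical outline, not LongRAG's implementation: the keyword-overlap Extractor, the threshold Filter, and the stub generator are placeholder assumptions that show how the two perspectives feed one generator and can be swapped independently.

```python
import re
from dataclasses import dataclass

def overlap(a, b):
    """Count shared word tokens between two strings (toy relevance signal)."""
    ta = set(re.findall(r"\w+", a.lower()))
    tb = set(re.findall(r"\w+", b.lower()))
    return len(ta & tb)

class KeywordExtractor:
    """Global perspective: pick the chunk that best covers the question."""
    def extract(self, question, chunks):
        return max(chunks, key=lambda c: overlap(question, c))

@dataclass
class ThresholdFilter:
    """Local perspective: keep chunks with enough factual overlap."""
    min_overlap: int = 2
    def filter(self, question, chunks):
        return [c for c in chunks if overlap(question, c) >= self.min_overlap]

def stub_generator(question, global_view, details):
    """Placeholder for a swappable LLM generator (ChatGLM3, Llama3, ...)."""
    return f"Q: {question}\nGlobal: {global_view}\nDetails: {details}"

def answer(question, chunks, extractor, filt, generator):
    """Compose both perspectives, then hand them to the generator."""
    global_view = extractor.extract(question, chunks)
    details = filt.filter(question, chunks)
    return generator(question, global_view, details)

chunks = [
    "The treaty was signed in 1648 in Westphalia.",
    "Unrelated sports news.",
    "The treaty ended the Thirty Years War.",
]
print(answer("When was the treaty signed?", chunks,
             KeywordExtractor(), ThresholdFilter(), stub_generator))
```

Because `answer` only depends on the `extract`/`filter` interfaces, either component can be replaced, e.g. a fine-tuned extractor paired with a different generator, which mirrors the repo's mix-and-match evaluation setup.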