QingFei1/LongRAG
[EMNLP 2024] LongRAG: A Dual-perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering
Implements a dual-perspective RAG architecture with separate Extractor and Filter components that decompose long-context understanding into global information retrieval and factual detail extraction. Built on LLaMA-Factory for supervised fine-tuning, it supports modular component composition—extractors and filters can be independently swapped across different LLM generators (ChatGLM3, Llama3, GPT-3.5, GLM-4). Evaluated on multi-hop QA datasets from LongBench with context lengths up to 32k tokens, achieving 52.56 F1 average with GLM-4.
120 stars. No commits in the last 6 months.
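The dual-perspective design described above can be sketched as a small pipeline: an Extractor builds global background from all retrieved chunks, a Filter keeps question-relevant factual details, and both feed a swappable generator. This is a hypothetical sketch based only on the description; the function and class names are illustrative, not the repo's actual API.

```python
# Illustrative sketch of a dual-perspective RAG composition (names are
# hypothetical, not LongRAG's real interfaces).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Evidence:
    global_summary: str   # Extractor output: long-range background context
    facts: List[str]      # Filter output: factual details kept for the question


def extractor(chunks: List[str]) -> str:
    """Global view: condense retrieved chunks into background context
    (here, naively, the first sentence of each chunk)."""
    return " ".join(c.split(".")[0] for c in chunks)


def filter_facts(chunks: List[str], question: str) -> List[str]:
    """Detail view: keep only chunks mentioning a question keyword."""
    keywords = {w.lower() for w in question.split() if len(w) > 3}
    return [c for c in chunks if any(k in c.lower() for k in keywords)]


def answer(question: str, chunks: List[str],
           generate: Callable[[str], str]) -> str:
    """Compose both perspectives into one prompt for a swappable generator."""
    ev = Evidence(global_summary=extractor(chunks),
                  facts=filter_facts(chunks, question))
    prompt = (f"Background: {ev.global_summary}\n"
              f"Facts: {'; '.join(ev.facts)}\n"
              f"Question: {question}")
    return generate(prompt)  # generator is swappable, mirroring the modularity
```

Because `generate` is just a callable, any backend (ChatGLM3, Llama3, GPT-3.5, GLM-4) can be dropped in without touching the extractor or filter, which is the composition property the description highlights.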
Stars: 120
Forks: 14
Language: Python
License: —
Category:
Last pushed: Jan 29, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/QingFei1/LongRAG"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
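The curl command above can also be issued from Python. A minimal sketch, assuming only the URL pattern shown on this page; the response schema is not documented here, so the JSON body is returned as-is:

```python
# Sketch of calling the quality API from Python. Only the URL pattern comes
# from the page; the response fields are undocumented here, so we just parse
# and return the JSON body.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL, e.g. category='rag' for RAG repos."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint (100 requests/day without a key) and parse the JSON."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

`quality_url("rag", "QingFei1", "LongRAG")` reproduces the exact URL used in the curl example.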
Compare
Higher-rated alternatives
moodlehq/wiki-rag
An experimental Retrieval-Augmented Generation (RAG) system specialised in ingesting MediaWiki...
macromeer/offline-wikipedia-rag
Free offline AI with complete Wikipedia knowledge - 100% private
AdyTech99/volo
An F/OSS solution combining AI with Wikipedia knowledge via a RAG pipeline
xumozhu/RAG-system
Retrieval-Augmented Generation system: ask a question, retrieve relevant documents, and generate...
MauroAndretta/WikiRag
WikiRag is a Retrieval-Augmented Generation (RAG) system designed for question answering, it...