RAG-system and LongRAG

These are ecosystem siblings: LongRAG is a specialized research advancement (long-context QA at scale) built on the foundational RAG paradigm that the basic RAG-system implements.

RAG-system: score 30 (Emerging)
  Maintenance 2/25, Adoption 4/25, Maturity 9/25, Community 15/25
  Stars: 8 | Forks: 4 | Downloads: | Commits (30d): 0
  Language: Jupyter Notebook | License: MIT
  Flags: Stale 6m, No Package, No Dependents

LongRAG: score 32 (Emerging)
  Maintenance 0/25, Adoption 10/25, Maturity 8/25, Community 14/25
  Stars: 120 | Forks: 14 | Downloads: | Commits (30d): 0
  Language: Python | License:
  Flags: No License, Stale 6m, No Package, No Dependents

About RAG-system

xumozhu/RAG-system

A Retrieval-Augmented Generation demo: ask a question, retrieve the relevant documents, and generate a grounded answer (document retrieval + LLM answering).
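The retrieve-then-answer flow described above can be sketched in a few lines. This is a minimal, self-contained illustration, not code from the xumozhu/RAG-system repo: the bag-of-words retriever and the `llm` callable are stand-ins for a real embedding model and a real LLM call.

```python
from collections import Counter
import math

def _vec(text):
    # Toy bag-of-words term-frequency "embedding" (a real system would
    # use a learned embedder here).
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=2):
    # Rank documents by similarity to the question; keep the top k.
    q = _vec(question)
    ranked = sorted(documents, key=lambda d: _cosine(q, _vec(d)), reverse=True)
    return ranked[:k]

def answer(question, documents, llm):
    # Classic RAG: retrieve context, then let the LLM answer grounded in it.
    context = "\n".join(retrieve(question, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)

docs = [
    "RAG combines retrieval with generation.",
    "Paris is the capital of France.",
    "The Eiffel Tower is in Paris.",
]
# Stub "LLM" that echoes the top retrieved line; a real deployment would
# call a model API instead.
print(answer("What is the capital of France?", docs,
             llm=lambda p: p.splitlines()[1]))
```

The same two-stage shape (retriever feeding a generator prompt) holds regardless of how each stage is implemented.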

About LongRAG

QingFei1/LongRAG

[EMNLP 2024] LongRAG: A Dual-perspective Retrieval-Augmented Generation Paradigm for Long-Context Question Answering

Implements a dual-perspective RAG architecture with separate Extractor and Filter components that decompose long-context understanding into global information retrieval and factual detail extraction. Built on LLaMA-Factory for supervised fine-tuning, it supports modular component composition: extractors and filters can be independently swapped across different LLM generators (ChatGLM3, Llama3, GPT-3.5, GLM-4). Evaluated on multi-hop QA datasets from LongBench with context lengths up to 32k tokens, it achieves an average F1 of 52.56 with GLM-4.
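The dual-perspective idea (a global pass over the long context, then a local pass that keeps factual detail) can be sketched conceptually. This is an illustrative toy, not LongRAG's actual Extractor/Filter code: the function names, the word-overlap heuristics, and the chunking scheme are all assumptions made for the sketch.

```python
def chunk(document, size=40):
    # Split a long context into fixed-size word chunks (a stand-in for
    # whatever chunking a real long-context pipeline uses).
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def extractor(chunks, question):
    # "Global" perspective: keep any chunk broadly related to the question
    # (here, sharing at least one word with it).
    q = set(question.lower().split())
    return [c for c in chunks if q & set(c.lower().split())]

def filter_facts(chunks, question):
    # "Local" perspective: keep only chunks whose overlap with the question
    # is strong enough to plausibly carry the answer's factual detail.
    q = set(question.lower().split())
    return [c for c in chunks if len(q & set(c.lower().split())) >= 2]

def long_rag_answer(document, question, generator):
    # Compose the two perspectives, then hand the surviving facts to a
    # generator (any LLM callable could be swapped in here).
    global_view = extractor(chunk(document), question)
    details = filter_facts(global_view, question)
    prompt = f"Facts: {' '.join(details)}\nQuestion: {question}"
    return generator(prompt)

doc = "Berlin is the capital of Germany. Alice lives there."
print(long_rag_answer(doc, "What is the capital of Germany?",
                      generator=lambda p: p))
```

Because the extractor and filter are independent callables, either can be replaced without touching the other, which mirrors the modular component composition the repo description emphasizes.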

Scores are updated daily from GitHub, PyPI, and npm data.