Awesome-GraphRAG and Awesome-RAG

These complementary resources address different architectural approaches to RAG: Awesome-GraphRAG focuses specifically on graph-based retrieval methods, while Awesome-RAG covers RAG development broadly. Consult one or the other depending on whether you need general RAG techniques or graph-enhanced retrieval strategies.

| Metric | Awesome-GraphRAG | Awesome-RAG |
|---|---|---|
| Overall score | 55 (Established) | 39 (Emerging) |
| Maintenance | 10/25 | 10/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 8/25 |
| Community | 19/25 | 11/25 |
| Stars | 2,181 | 439 |
| Forks | 183 | 19 |
| Downloads | — | — |
| Commits (30d) | 0 | 0 |
| Language | — | — |
| License | MIT | None |
| Package / dependents | None / none | None / none |

About Awesome-GraphRAG

DEEP-PolyU/Awesome-GraphRAG

Awesome-GraphRAG: A curated list of resources (surveys, papers, benchmarks, and open-source projects) on graph-based retrieval-augmented generation.

Organizes GraphRAG research into three core dimensions—knowledge organization (graph construction via entity extraction or hierarchical indexing), retrieval mechanisms (semantic similarity, logical reasoning, GNN-based, and LLM-based approaches), and knowledge integration (fine-tuning vs. in-context learning)—with accompanying benchmarks and open-source implementations. The repository maps distinct GraphRAG paradigms including knowledge-based approaches that extract entity-relation graphs from raw text and index-based approaches that build topic hierarchies, contrasting both against traditional chunk-based RAG. Covers integration patterns across LLM frameworks and provides curated links to peer-reviewed papers, established benchmarks like GraphRAG-Bench, and reference implementations including LinearRAG and LogicRAG.

About Awesome-RAG

liunian-Jay/Awesome-RAG

💡 Awesome RAG: A curated resource on Retrieval-Augmented Generation (RAG) for LLMs, focused on the technology's development.

Curated repository tracking peer-reviewed RAG research across top-tier conferences (NeurIPS, ACL, ICML, ICLR) and recent arXiv papers, organized by publication date and venue. Includes widely used evaluation datasets (HotpotQA, TriviaQA, ASQA, etc.) for benchmarking retrieval and generation systems. Also links associated frameworks and tools such as LightRAG, AgenticRAG-RL, and QAgent, and encourages community contributions to keep pace with rapidly evolving RAG methodologies.

Scores updated daily from GitHub, PyPI, and npm data.
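The overall scores shown above appear to be the sum of the four 25-point subscores (10 + 10 + 16 + 19 = 55 for Awesome-GraphRAG; 10 + 10 + 8 + 11 = 39 for Awesome-RAG). A minimal sketch of that composition, assuming simple addition of the subscores (the `overall_score` function and the exact weighting are assumptions, not the site's published formula):

```python
# Hypothetical reconstruction of the composite score: four subscores,
# each on a 0-25 scale, summed into a 0-100 overall score.
def overall_score(maintenance: int, adoption: int, maturity: int, community: int) -> int:
    """Sum four 0-25 subscores into a 0-100 overall score."""
    for s in (maintenance, adoption, maturity, community):
        assert 0 <= s <= 25, "each subscore is on a 0-25 scale"
    return maintenance + adoption + maturity + community

print(overall_score(10, 10, 16, 19))  # Awesome-GraphRAG: 55
print(overall_score(10, 10, 8, 11))   # Awesome-RAG: 39
```

This reproduces the 55 and 39 totals from the tabulated subscores, consistent with a 100-point scale split evenly across the four dimensions.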