AutoRAG and Awesome-LLM-RAG

AutoRAG is a practical framework for building and optimizing RAG systems, while Awesome-LLM-RAG is a curated knowledge resource documenting RAG techniques and approaches. The two are complementary: the latter helps practitioners discover methods that the former helps them implement and evaluate.

Score comparison (subscores out of 25):

                  AutoRAG          Awesome-LLM-RAG
Overall score     70 (Verified)    47 (Emerging)
Maintenance       16/25            13/25
Adoption          10/25            10/25
Maturity          25/25             8/25
Community         19/25            16/25
Stars             4,609            1,312
Forks             381              74
Downloads         —                —
Commits (30d)     5                4
Language          Python           —
License           Apache-2.0       —
Risk flags        None             No license, no package, no dependents

About AutoRAG

Marker-Inc-Korea/AutoRAG

AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation

Provides end-to-end RAG pipeline optimization through YAML-driven configuration, encompassing document parsing, semantic chunking, and QA dataset generation with support for multiple parsing/chunking strategies simultaneously. Uses grid-search and metric-driven evaluation across retriever-generator combinations to identify optimal module configurations, with results tracked in a dashboard for deployment-ready pipeline export. Integrates with LlamaIndex, LangChain, and local embedding models, supporting both cloud APIs (OpenAI) and GPU-accelerated inference for custom models.
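The grid-search idea above can be illustrated with a toy sketch. This is not AutoRAG's actual API; the module names, scores, and the `evaluate` function are hypothetical placeholders standing in for a real pipeline run over a QA dataset.

```python
from itertools import product

def evaluate(retriever: str, generator: str) -> float:
    """Stand-in metric: a real pipeline would compute e.g. retrieval
    recall or answer F1 for this module combination on a QA dataset.
    The scores below are made up for illustration."""
    scores = {
        ("bm25", "api-llm"): 0.71,
        ("bm25", "local-llm"): 0.64,
        ("vectordb", "api-llm"): 0.78,
        ("vectordb", "local-llm"): 0.69,
    }
    return scores[(retriever, generator)]

# Candidate modules for each pipeline node (hypothetical names).
retrievers = ["bm25", "vectordb"]
generators = ["api-llm", "local-llm"]

# Grid search: evaluate every retriever-generator combination
# and keep the highest-scoring pair.
best = max(product(retrievers, generators),
           key=lambda combo: evaluate(*combo))
print(best)  # the top-scoring (retriever, generator) pair
```

AutoRAG drives this kind of search from a YAML configuration and records per-combination metrics in its dashboard, but the selection logic reduces to exactly this pattern: enumerate module combinations, score each, export the winner.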

About Awesome-LLM-RAG

jxzhangjhu/Awesome-LLM-RAG

Awesome-LLM-RAG: a curated list of advanced retrieval augmented generation (RAG) in Large Language Models

Organizes research across 10+ RAG subcategories (instruction tuning, embeddings, evaluation, optimization) with direct links to papers and implementations, enabling researchers to systematically explore advances beyond basic retrieval-generation pipelines. Covers the complete RAG stack from retrieval mechanics and in-context learning strategies to specialized techniques like graph-based RAG and adaptive routing, alongside curated workshops and foundational texts for practical implementation guidance.

Scores updated daily from GitHub, PyPI, and npm data.