AutoRAG and Awesome-LLM-RAG
AutoRAG is a practical framework for building and optimizing RAG systems; Awesome-LLM-RAG is a curated knowledge resource documenting RAG techniques and approaches. The two are complementary: the latter helps practitioners discover methods that the former helps them implement and evaluate.
About AutoRAG
Marker-Inc-Korea/AutoRAG
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
Provides end-to-end RAG pipeline optimization through YAML-driven configuration, encompassing document parsing, semantic chunking, and QA dataset generation with support for multiple parsing/chunking strategies simultaneously. Uses grid-search and metric-driven evaluation across retriever-generator combinations to identify optimal module configurations, with results tracked in a dashboard for deployment-ready pipeline export. Integrates with LlamaIndex, LangChain, and local embedding models, supporting both cloud APIs (OpenAI) and GPU-accelerated inference for custom models.
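As an illustration of the YAML-driven workflow described above, the sketch below declares node lines whose nodes list candidate modules and the metrics used to score them during grid search. This is a hedged example based on the config shape documented in the AutoRAG repository; exact keys (`node_line_name`, `module_type`, `strategy`) and available module names may differ across AutoRAG versions, so verify against the documentation for your installed release.

```yaml
# Illustrative AutoRAG-style pipeline config (keys assumed from the
# project's documented YAML shape; check your installed version).
node_lines:
  - node_line_name: retrieve_node_line
    nodes:
      - node_type: retrieval
        strategy:
          # grid search scores each candidate module on these metrics
          metrics: [retrieval_f1, retrieval_recall]
        top_k: 3
        modules:
          - module_type: bm25        # lexical retrieval candidate
          - module_type: vectordb    # dense retrieval candidate
            embedding_model: openai
  - node_line_name: generate_node_line
    nodes:
      - node_type: generator
        strategy:
          metrics: [bleu, rouge]
        modules:
          - module_type: llama_index_llm
            llm: openai
```

AutoRAG evaluates every retriever-generator combination declared here against the QA dataset, records the per-metric results, and reports the best-scoring module configuration for export.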
About Awesome-LLM-RAG
jxzhangjhu/Awesome-LLM-RAG
Awesome-LLM-RAG: a curated list of advanced retrieval augmented generation (RAG) in Large Language Models
Organizes research across 10+ RAG subcategories (instruction tuning, embeddings, evaluation, optimization) with direct links to papers and implementations, enabling researchers to systematically explore advances beyond basic retrieval-generation pipelines. Covers the complete RAG stack from retrieval mechanics and in-context learning strategies to specialized techniques like graph-based RAG and adaptive routing, alongside curated workshops and foundational texts for practical implementation guidance.