AutoRAG and awesome-rag

AutoRAG is a practical RAG evaluation and optimization framework; awesome-rag's curated list of RAG techniques and implementations could inform AutoRAG's benchmarking datasets and baseline comparisons.

Score comparison

                  AutoRAG          awesome-rag
Overall score     70 (Verified)    46 (Emerging)
Maintenance       16/25            6/25
Adoption          10/25            10/25
Maturity          25/25            16/25
Community         19/25            14/25
Stars             4,609            374
Forks             381              31
Downloads         n/a              n/a
Commits (30d)     5                0
Language          Python           n/a
License           Apache-2.0       CC0-1.0

No risk flags. No published package; no dependents.

About AutoRAG

Marker-Inc-Korea/AutoRAG

AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation

Provides end-to-end RAG pipeline optimization through YAML-driven configuration, encompassing document parsing, semantic chunking, and QA dataset generation with support for multiple parsing/chunking strategies simultaneously. Uses grid-search and metric-driven evaluation across retriever-generator combinations to identify optimal module configurations, with results tracked in a dashboard for deployment-ready pipeline export. Integrates with LlamaIndex, LangChain, and local embedding models, supporting both cloud APIs (OpenAI) and GPU-accelerated inference for custom models.
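The YAML-driven optimization described above can be sketched roughly as follows. The structure (node lines containing nodes, each with an evaluation strategy and candidate modules to grid-search over) mirrors AutoRAG's configuration style, but the exact keys, module names, and metric names here are illustrative and should be checked against AutoRAG's documentation:

```yaml
# Illustrative AutoRAG-style optimization config (keys/modules are assumptions).
# Each node lists candidate modules; AutoRAG evaluates every combination
# against the listed metrics and keeps the best-performing configuration.
node_lines:
  - node_line_name: retrieve_node_line
    nodes:
      - node_type: retrieval
        strategy:
          metrics: [retrieval_f1, retrieval_recall]
        top_k: 3
        modules:
          - module_type: bm25              # sparse lexical retriever
          - module_type: vectordb          # dense retriever
            embedding_model: openai
  - node_line_name: generate_node_line
    nodes:
      - node_type: generator
        strategy:
          metrics: [rouge, meteor]
        modules:
          - module_type: llama_index_llm   # generator backed by LlamaIndex
            llm: openai
```

Swapping in additional candidate modules (for example, another embedding model or a reranker node) only requires extending the lists; the grid search and metric tracking stay the same.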

About awesome-rag

coree/awesome-rag

A curated list of retrieval-augmented generation (RAG) resources for large language models

Organizes academic papers, tutorials, and open-source tools across RAG methodologies including active retrieval, query rewriting, and in-context learning approaches. Covers architectural variations like black-box retrieval augmentation and hybrid compute strategies, with dynamic citation tracking for each work. Structured to help researchers navigate retrieval integration patterns from pretraining through instruction-tuning and inference-time deployment.

Scores are updated daily from GitHub, PyPI, and npm data.