AutoRAG and awesome-rag
AutoRAG is a practical framework for evaluating and optimizing RAG pipelines; awesome-rag's curated list of RAG techniques and implementations could help inform AutoRAG's benchmarking datasets and baseline comparisons.
About AutoRAG
Marker-Inc-Korea/AutoRAG
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
Provides end-to-end RAG pipeline optimization through YAML-driven configuration, encompassing document parsing, semantic chunking, and QA dataset generation with support for multiple parsing/chunking strategies simultaneously. Uses grid-search and metric-driven evaluation across retriever-generator combinations to identify optimal module configurations, with results tracked in a dashboard for deployment-ready pipeline export. Integrates with LlamaIndex, LangChain, and local embedding models, supporting both cloud APIs (OpenAI) and GPU-accelerated inference for custom models.
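The grid-search idea described above can be sketched in a few lines: score every retriever-generator pairing against a QA set and keep the best combination. This is an illustrative toy, not AutoRAG's actual API; the module names, QA-set shape, and containment-based metric are all assumptions for the sketch.

```python
from itertools import product

def evaluate(retriever: str, generator: str, qa_set: list[dict]) -> float:
    """Toy metric: fraction of QA pairs whose gold answer appears in the
    (simulated) generated output. A real pipeline would invoke the actual
    retrieval and generation modules here."""
    hits = 0
    for item in qa_set:
        # Simulated retrieval: each QA item carries pre-canned contexts per retriever.
        context = item["docs"].get(retriever, "")
        # Simulated generation: just echo the context under the generator's name.
        answer = f"{generator}: {context}"
        hits += item["gold"] in answer
    return hits / len(qa_set)

def grid_search(retrievers, generators, qa_set):
    # Exhaustively score every retriever-generator combination,
    # then pick the highest-scoring pair (metric-driven selection).
    scores = {
        (r, g): evaluate(r, g, qa_set)
        for r, g in product(retrievers, generators)
    }
    best = max(scores, key=scores.get)
    return best, scores

# Hypothetical two-question QA set with per-retriever contexts.
qa_set = [
    {"docs": {"bm25": "Paris is the capital of France.",
              "vectordb": "France's capital is Paris."},
     "gold": "Paris"},
    {"docs": {"bm25": "",
              "vectordb": "The Rhine flows through Basel."},
     "gold": "Basel"},
]

best, scores = grid_search(["bm25", "vectordb"], ["gpt", "local-llm"], qa_set)
# Here "vectordb" retrieves useful context for both questions, "bm25" for only one,
# so the search selects a vectordb-based pairing.
```

AutoRAG expresses the equivalent search space declaratively in YAML (lists of parsing, chunking, retrieval, and generation modules per node) and records each combination's metrics for its dashboard, but the selection logic reduces to this score-and-argmax loop.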
About awesome-rag
coree/awesome-rag
A curated list of retrieval-augmented generation (RAG) in large language models
Organizes academic papers, tutorials, and open-source tools across RAG methodologies including active retrieval, query rewriting, and in-context learning approaches. Covers architectural variations like black-box retrieval augmentation and hybrid compute strategies, with dynamic citation tracking for each work. Structured to help researchers navigate retrieval integration patterns from pretraining through instruction-tuning and inference-time deployment.