AutoRAG and RAG-FiT
AutoRAG provides systematic evaluation and optimization of RAG pipelines, while RAG-FiT enhances the language model component itself through fine-tuning—making them complementary tools that address different layers of RAG system improvement.
About AutoRAG
Marker-Inc-Korea/AutoRAG
AutoRAG: An Open-Source Framework for Retrieval-Augmented Generation (RAG) Evaluation & Optimization with AutoML-Style Automation
Provides end-to-end RAG pipeline optimization through YAML-driven configuration, covering document parsing, semantic chunking, and QA dataset generation, with support for running multiple parsing and chunking strategies simultaneously. It uses grid search with metric-driven evaluation across retriever-generator combinations to identify the best-performing module configuration, tracks results in a dashboard, and exports the winning pipeline in a deployment-ready form. Integrates with LlamaIndex, LangChain, and local embedding models, supporting both cloud APIs (OpenAI) and GPU-accelerated inference for custom models.
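The grid search is driven by a YAML file that lists candidate modules and metrics for each pipeline node; AutoRAG evaluates every combination and keeps the best. A minimal sketch following the project's documented schema (the specific modules and metrics chosen here are examples, not a recommendation):

```yaml
# Candidate modules per node; AutoRAG benchmarks each combination.
node_lines:
  - node_line_name: retrieve_node_line
    nodes:
      - node_type: retrieval
        strategy:
          metrics: [retrieval_f1, retrieval_recall]
        top_k: 3
        modules:
          - module_type: bm25        # lexical retrieval candidate
          - module_type: vectordb    # dense retrieval candidate
  - node_line_name: post_retrieve_node_line
    nodes:
      - node_type: generator
        strategy:
          metrics: [bleu, rouge]
        modules:
          - module_type: openai_llm
            llm: gpt-4o-mini         # example model choice
```

Each `modules` list defines the search space for that node, so adding a candidate is a one-line config change rather than new pipeline code.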
About RAG-FiT
IntelLabs/RAG-FiT
Framework for enhancing LLMs for RAG tasks using fine-tuning.
Provides end-to-end RAG dataset creation, PEFT-based training via TRL, and RAG-specific evaluation metrics (EM, F1, ROUGE, BERTScore, RAGAS). Built on Hydra for configuration-driven workflows, it supports retrieval integration with frameworks such as Haystack and publishes trained models through the HuggingFace Hub. Four modular components handle dataset processing, parameter-efficient fine-tuning, inference, and evaluation, with metrics that can operate at multiple levels, including on retrieval metadata and citations.
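Because every stage is Hydra-driven, a training run is described by a config rather than a script. The sketch below is purely illustrative of that pattern; all key names and values are assumptions, not RAG-FiT's actual schema:

```yaml
# Hypothetical Hydra-style config for the PEFT training stage.
# Key names are illustrative assumptions, not RAG-FiT's exact schema.
model:
  model_name_or_path: meta-llama/Llama-3.1-8B   # example base model
  lora:                                         # parameter-efficient fine-tuning
    r: 16
    lora_alpha: 32
    target_modules: [q_proj, v_proj]
train:
  learning_rate: 2.0e-4
  num_train_epochs: 1
  output_dir: ./checkpoints
data:
  file: rag_train_dataset.jsonl   # output of the dataset-creation stage
```

Hydra's override syntax (e.g. `train.learning_rate=1e-4` on the command line) then lets experiments vary hyperparameters without editing the file.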