avnlp/rag-model-training

Training code for advanced RAG techniques: Adaptive-RAG, Corrective RAG, RQ-RAG, Self-RAG, Agentic RAG, and ReZero. Reproduces the papers' methodologies to fine-tune LLMs via SFT (supervised fine-tuning) and GRPO (Group Relative Policy Optimization) for adaptive retrieval, corrective evaluation, query refinement, self-reflection, and agentic search behaviors.

Score: 36 / 100 (Emerging)

Implements modular training pipelines for six distinct RAG architectures using SFT and reinforcement learning (GRPO), with specialized components like query complexity classifiers, document relevance evaluators, and multi-phase reflection critics. Built on Llama and T5 model families with domain-specific datasets (financial QA, multi-hop reasoning, web search augmentation), enabling fine-grained control over retrieval depth, document filtering, and agentic decision-making. Provides reproducible implementations of peer-reviewed methodologies with configurable training workflows for different RAG pipeline stages.

No package · No dependents
Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 9 / 25
Community: 13 / 25
(The four component scores sum to the 36 / 100 total.)


Stars: 6
Forks: 2
Language: Python
License: MIT
Last pushed: Mar 05, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/avnlp/rag-model-training"

Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
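The same lookup can be scripted. A minimal Python sketch using only the standard library, assuming the endpoint returns a JSON body and follows the `/quality/{ecosystem}/{owner}/{repo}` path pattern visible in the curl example (the helper names here are illustrative, not part of any documented client):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem: str, owner: str, repo: str) -> str:
    """Build the endpoint URL following the path pattern from the curl example."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and parse the quality report (assumes the API responds with JSON)."""
    with urllib.request.urlopen(quality_url(ecosystem, owner, repo), timeout=10) as resp:
        return json.load(resp)

# Usage (performs a live HTTP request, same as the curl command above):
#   report = fetch_quality("rag", "avnlp", "rag-model-training")
#   print(json.dumps(report, indent=2))
```

At 100 anonymous requests per day, unauthenticated batch use should add client-side throttling or caching.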