agentic-rag-for-dummies and Agentic-RAG-R1
These repositories are complementary: the first provides a foundational, modular framework for learning and building agentic RAG systems, while the second adds reinforcement learning-based optimization of agent decision-making on top of that capability.
About agentic-rag-for-dummies
GiovanniPasq/agentic-rag-for-dummies
A modular Agentic RAG built with LangGraph — learn Retrieval-Augmented Generation Agents in minutes.
Built on LangGraph's agentic framework, this system implements hierarchical parent-child chunk indexing for precision search paired with context-rich retrieval, conversation memory across turns, and human-in-the-loop query clarification. Multi-agent map-reduce parallelizes sub-query resolution with self-correction and context compression, while supporting pluggable LLM providers (Ollama, OpenAI, Anthropic, Google) and Qdrant vector storage—all orchestrated through observable graph execution with Langfuse integration.
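The parent-child indexing idea mentioned above can be sketched in a few lines of plain Python. This is an illustrative toy, not the repository's actual API: small child chunks are indexed for precise matching, but retrieval returns the larger parent chunk so the LLM sees richer context. All function names and sizes here are made up for the example.

```python
# Hypothetical sketch of parent-child chunk indexing (illustrative names,
# not agentic-rag-for-dummies' real code). Children are the searchable
# units; each child maps back to its larger parent chunk.

def build_index(document: str, parent_size: int = 200, child_size: int = 50):
    """Split into parent chunks, then split each parent into child chunks."""
    parents = [document[i:i + parent_size]
               for i in range(0, len(document), parent_size)]
    children, child_to_parent = [], {}
    for p_id, parent in enumerate(parents):
        for j in range(0, len(parent), child_size):
            child_to_parent[len(children)] = p_id
            children.append(parent[j:j + child_size])
    return children, child_to_parent, parents

def retrieve(query: str, children, child_to_parent, parents) -> str:
    """Naive keyword match over small child chunks; in a real system this
    would be a vector search (e.g. Qdrant). Returns the matching child's
    parent chunk for context-rich generation."""
    scores = [sum(tok in child.lower() for tok in query.lower().split())
              for child in children]
    best_child = max(range(len(children)), key=scores.__getitem__)
    return parents[child_to_parent[best_child]]
```

In the real system the child-level search runs against Qdrant embeddings rather than keyword counts, but the parent lookup step works the same way: match small, return big.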
About Agentic-RAG-R1
jiangxinke/Agentic-RAG-R1
Agentic RAG R1 Framework via Reinforcement Learning
Implements GRPO (Group Relative Policy Optimization) to train language models in autonomous tool calling and multi-step reasoning over retrieval actions, supporting an agent memory stack with backtracking and summarization. It integrates ArtSearch for Wikipedia retrieval and TCRAG as a rollout generator, and offers LoRA tuning, quantization, and DeepSpeed distributed training (ZeRO 2/3) to efficiently fine-tune models up to 32B parameters on two A100 GPUs. A composite reward model combines accuracy, format, and RAG-specific RAGAS-based scoring to optimize both answer quality and retrieval effectiveness.
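Two ideas from that description can be sketched concretely: the composite reward, and GRPO's group-relative advantage, which normalizes each sampled rollout's reward against the mean and standard deviation of its rollout group instead of using a learned critic. This is a minimal sketch under assumed names and weights, not the repository's implementation.

```python
# Illustrative sketch (not Agentic-RAG-R1's actual code) of a composite
# reward and GRPO-style group-relative advantages. Weights are invented.
from statistics import mean, pstdev

def composite_reward(accuracy: float, format_ok: bool, ragas_score: float,
                     weights=(0.6, 0.1, 0.3)) -> float:
    """Weighted sum of accuracy, format-compliance, and retrieval-quality
    (RAGAS-style) components; the 0.6/0.1/0.3 split is hypothetical."""
    w_acc, w_fmt, w_rag = weights
    return w_acc * accuracy + w_fmt * float(format_ok) + w_rag * ragas_score

def grpo_advantages(group_rewards: list[float], eps: float = 1e-8) -> list[float]:
    """GRPO advantage per rollout: (reward - group mean) / (group std + eps).
    Rollouts above their group's average get positive advantage."""
    mu, sigma = mean(group_rewards), pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]
```

The group-relative normalization is what lets GRPO skip the value network used by PPO: each completion in a sampled group is scored only against its siblings for the same prompt.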