Agentic-RAG-R1 and agentic-rag
These are **competitors** — both implement agentic RAG systems that add agent-like decision-making to retrieval-augmented generation. They differ in approach: Agentic-RAG-R1 trains the agent behavior with reinforcement learning, while agentic-rag orchestrates it through specialized reasoning nodes.
About Agentic-RAG-R1
jiangxinke/Agentic-RAG-R1
Agentic RAG R1 Framework via Reinforcement Learning
Implements GRPO (Group Relative Policy Optimization) to train language models in autonomous tool calling and multi-step reasoning over retrieval actions, supported by an agent memory stack with backtracking and summarization. Integrates with ArtSearch for Wikipedia retrieval and TCRAG as a rollout generator, and offers LoRA tuning, quantization, and DeepSpeed distributed training (ZeRO 2/3) to fine-tune models up to 32B parameters on two A100 GPUs. Includes a composite reward model that combines accuracy, format, and RAG-specific RAGAS-based scoring to optimize both answer quality and retrieval effectiveness.
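A composite reward of this kind is typically a weighted sum of per-rollout scores. The sketch below is illustrative only — the weights, tag format, and helper names are assumptions, not Agentic-RAG-R1's actual API:

```python
def format_reward(completion: str) -> float:
    """1.0 if the rollout uses the expected reasoning/answer tags, else 0.0.
    (The tag set here is an assumed convention, not the repo's.)"""
    required = ("<think>", "</think>", "<answer>", "</answer>")
    return 1.0 if all(tag in completion for tag in required) else 0.0

def accuracy_reward(answer: str, gold: str) -> float:
    """Exact-match accuracy on the extracted answer."""
    return 1.0 if answer.strip().lower() == gold.strip().lower() else 0.0

def composite_reward(completion: str, answer: str, gold: str,
                     rag_score: float, w=(0.5, 0.3, 0.2)) -> float:
    """Weighted sum of accuracy, format, and a RAGAS-style retrieval score
    (rag_score is assumed to be precomputed and normalized to [0, 1])."""
    wa, wf, wr = w
    return (wa * accuracy_reward(answer, gold)
            + wf * format_reward(completion)
            + wr * rag_score)
```

In GRPO the scalar rewards from a group of rollouts are then normalized against the group mean to form advantages, so the relative weighting of the three terms directly shapes which rollouts are reinforced.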
About agentic-rag
FareedKhan-dev/agentic-rag
Agentic RAG to achieve human like reasoning
Implements a multi-stage agentic pipeline with specialized tools (Librarian, Analyst, Scout) coordinated through deliberate reasoning nodes—Gatekeeper for validation, Planner for orchestration, Auditor for self-correction, and Strategist for causal inference. Builds knowledge from structure-aware document parsing, LLM-generated metadata, and hybrid vector/relational stores, then stress-tests robustness through adversarial Red Team challenges and evaluation across retrieval quality, reasoning correctness, and cost metrics.
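The node names below follow the README's terminology, but the control flow is an assumption — a minimal sketch of how a Gatekeeper → Planner → tools → Auditor pipeline could be wired together:

```python
def gatekeeper(query: str) -> bool:
    """Validate the query before spending tokens (placeholder check)."""
    return bool(query.strip())

def planner(query: str) -> list:
    """Decompose the query into an ordered list of tool names.
    A real planner would choose tools per query; this one is fixed."""
    return ["librarian", "analyst", "scout"]

def auditor(draft: str) -> str:
    """Self-correction pass over the draft answer (placeholder)."""
    return draft.strip()

def run_pipeline(query: str, tools: dict) -> str:
    """Coordinate the reasoning nodes: validate, plan, gather, audit."""
    if not gatekeeper(query):
        return "Query rejected by Gatekeeper."
    evidence = [tools[name](query) for name in planner(query)]
    return auditor(" ".join(evidence))
```

In practice each node would be an LLM call (and the Strategist would sit between gathering and auditing for causal inference), but the value of the design is visible even in this skeleton: validation, planning, and self-correction are separate, testable stages.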