autonomous-agentic-rag vs. agentic-rag

These appear to be **successive iterations by the same author**, with the second project (B) a refined or production version of the first (A). They are competitors only in the sense that B supersedes A: users would choose one or the other rather than use both together.

| | autonomous-agentic-rag | agentic-rag |
|---|---|---|
| Overall score | 50 (Established) | 50 (Established) |
| Maintenance | 6/25 | 2/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 13/25 | 15/25 |
| Community | 21/25 | 23/25 |
| Stars | 125 | 198 |
| Forks | 41 | 67 |
| Downloads | — | — |
| Commits (30d) | 0 | 0 |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT | MIT |
| Flags | No package, no dependents | Stale (6 months), no package, no dependents |

About autonomous-agentic-rag

FareedKhan-dev/autonomous-agentic-rag

Self-improving agentic RAG pipeline

Implements a multi-agent architecture in which specialist agents, orchestrated via LangGraph, collaboratively generate outputs that a custom scoring system evaluates across multiple dimensions (accuracy, feasibility, compliance). An outer evolutionary loop uses diagnostician and SOP-architect agents to iteratively refine standard operating procedures based on these performance vectors, identifying Pareto-optimal trade-offs. Integrates LangChain/LangGraph for orchestration, Ollama for local LLMs, FAISS and DuckDB for multi-source knowledge indexing (PubMed, FDA guidelines, MIMIC-III clinical data), and LangSmith for observability.
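The "Pareto-optimal trade-offs" step of the outer loop can be illustrated in plain Python. This is a minimal sketch, not the repository's actual code: the `SOPCandidate` class, the example candidates, and their scores are all hypothetical, assuming each candidate SOP carries a performance vector over the three dimensions named above and that only non-dominated candidates survive to the next refinement round.

```python
# Hypothetical sketch of Pareto filtering over SOP performance vectors.
from dataclasses import dataclass

@dataclass(frozen=True)
class SOPCandidate:
    name: str
    accuracy: float      # higher is better
    feasibility: float   # higher is better
    compliance: float    # higher is better

    def vector(self):
        return (self.accuracy, self.feasibility, self.compliance)

def dominates(a: SOPCandidate, b: SOPCandidate) -> bool:
    """Standard Pareto dominance: a is at least as good as b on every
    dimension and strictly better on at least one."""
    av, bv = a.vector(), b.vector()
    return all(x >= y for x, y in zip(av, bv)) and any(x > y for x, y in zip(av, bv))

def pareto_front(candidates):
    """Keep only candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Illustrative candidates: "strict" trades feasibility for compliance,
# "redundant" is dominated by "baseline" and drops out.
sops = [
    SOPCandidate("baseline",  0.70, 0.90, 0.80),
    SOPCandidate("strict",    0.85, 0.60, 0.95),
    SOPCandidate("redundant", 0.65, 0.60, 0.75),
]

front = pareto_front(sops)
print([c.name for c in front])  # → ['baseline', 'strict']
```

In the evolutionary loop described above, the surviving front would then be handed to the diagnostician/SOP-architect agents for the next round of refinement.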

About agentic-rag

FareedKhan-dev/agentic-rag

Agentic RAG to achieve human-like reasoning

Implements a multi-stage agentic pipeline with specialized tools (Librarian, Analyst, Scout) coordinated through deliberate reasoning nodes—Gatekeeper for validation, Planner for orchestration, Auditor for self-correction, and Strategist for causal inference. Builds knowledge from structure-aware document parsing, LLM-generated metadata, and hybrid vector/relational stores, then stress-tests robustness through adversarial Red Team challenges and evaluation across retrieval quality, reasoning correctness, and cost metrics.
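The staged control flow described above (validate, route, self-correct) can be sketched in pure Python. All function names, routing rules, and return values here are illustrative assumptions, not the repository's actual API; real nodes would call LLMs and retrieval stores rather than return placeholder strings.

```python
# Hypothetical sketch of the Gatekeeper -> Planner -> tool -> Auditor flow.

def gatekeeper(query: str) -> str:
    """Validation node: reject queries the pipeline cannot work with."""
    if not query.strip():
        raise ValueError("empty query rejected by gatekeeper")
    return query.strip()

def librarian(query: str) -> str:
    """Retrieval specialist (placeholder for vector-store lookup)."""
    return f"[retrieved passages for: {query}]"

def analyst(query: str) -> str:
    """Structured-data specialist (placeholder for relational queries)."""
    return f"[tabular analysis for: {query}]"

TOOLS = {"lookup": librarian, "analyze": analyst}

def planner(query: str) -> str:
    """Orchestration node: route aggregate questions to the Analyst,
    everything else to the Librarian."""
    aggregates = ("average", "count", "trend")
    return "analyze" if any(w in query.lower() for w in aggregates) else "lookup"

def auditor(answer: str) -> bool:
    """Self-correction node: a grounded answer must carry retrieved context."""
    return answer.startswith("[")

def run_pipeline(query: str) -> str:
    q = gatekeeper(query)
    answer = TOOLS[planner(q)](q)
    if not auditor(answer):
        answer = librarian(q)  # fall back to plain retrieval and retry
    return answer

print(run_pipeline("what is the average dosage trend?"))
```

The repository's adversarial "Red Team" stage would then probe a pipeline like this with malformed or misleading queries and score it on retrieval quality, reasoning correctness, and cost.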

Scores updated daily from GitHub, PyPI, and npm data.