DocAILab/XRAG

XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced Retrieval-Augmented Generation

Quality score: 56 / 100 (Established)

Provides modular benchmarking for RAG systems through pluggable retrievers (vector, BM25, hybrid, tree-based), embeddings, and LLMs with comprehensive evaluation metrics spanning traditional (F1, NDCG), LLM-based (faithfulness, correctness), and deep evaluation dimensions. Implements agentic RAG workflows via five orchestrator types (sequential, conditional, iterative, parallel, hybrid) and integrates with OpenAI APIs, local models (Qwen, LLaMA via Ollama), and vector databases for end-to-end evaluation pipelines.
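To make the hybrid retriever idea concrete, here is a minimal illustrative sketch of score fusion between a sparse (BM25-style) ranker and a dense (vector) ranker. This is not XRAG's actual API; the function names, the min-max normalization, and the `alpha` weighting are assumptions chosen to show the common fusion pattern.

```python
# Illustrative hybrid-retrieval sketch (NOT XRAG's API): fuse a sparse
# (BM25-style) score dict with a dense (vector-similarity) score dict
# via min-max normalization and a weighted sum.

def normalize(scores):
    """Min-max normalize a dict of doc_id -> score into [0, 1]."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {doc: 1.0 for doc in scores}
    return {doc: (s - lo) / (hi - lo) for doc, s in scores.items()}

def hybrid_rank(sparse, dense, alpha=0.5):
    """Return doc ids ranked by alpha*sparse + (1-alpha)*dense."""
    s, d = normalize(sparse), normalize(dense)
    docs = set(s) | set(d)
    fused = {doc: alpha * s.get(doc, 0.0) + (1 - alpha) * d.get(doc, 0.0)
             for doc in docs}
    return sorted(fused, key=fused.get, reverse=True)

# With alpha=0.3 the dense side dominates, so "b" outranks "a":
ranking = hybrid_rank({"a": 2.0, "b": 1.0}, {"a": 0.1, "b": 0.9}, alpha=0.3)
```

Normalizing before fusing matters because BM25 and cosine-similarity scores live on incompatible scales; without it, one ranker silently dominates regardless of `alpha`.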


Package: none. Dependents: none.

Maintenance: 13 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25


Stars: 120
Forks: 18
Language: Python
License: Apache-2.0
Last pushed: Mar 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/DocAILab/XRAG"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
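The curl call above can be wrapped in a small Python client. Only the URL shape is taken from the example; the response field names used in `summarize` ("score", "tier") are hypothetical placeholders, since the actual schema is not documented here.

```python
# Minimal client sketch for the quality endpoint shown above.
# The URL pattern comes from the curl example; the JSON field names
# ("score", "tier") are assumptions for illustration only.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(ecosystem, owner, repo):
    """Build the quality endpoint URL for one repository."""
    return f"{BASE}/{ecosystem}/{owner}/{repo}"

def fetch(ecosystem, owner, repo):
    """Fetch and decode the quality payload (performs a network call)."""
    with urlopen(quality_url(ecosystem, owner, repo)) as resp:
        return json.load(resp)

def summarize(payload):
    """One-line summary from a (hypothetical) response dict."""
    return f'{payload.get("score", "?")} / 100 ({payload.get("tier", "unknown")})'
```

Usage would be `summarize(fetch("rag", "DocAILab", "XRAG"))`, which for this repo should read as "56 / 100 (Established)" if the assumed field names match the real schema.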