XRAG and RAG-Performance

                  XRAG                         RAG-Performance
Overall score     53 (Established)             38 (Emerging)
Maintenance       10/25                        0/25
Adoption          10/25                        6/25
Maturity          16/25                        16/25
Community         17/25                        16/25
Stars             120                          19
Forks             18                           6
Downloads
Commits (30d)     0                            0
Language          Python                       Python
License           Apache-2.0                   MIT
Status            No Package, No Dependents    Stale 6m, No Package, No Dependents

About XRAG

DocAILab/XRAG

XRAG: eXamining the Core - Benchmarking Foundational Component Modules in Advanced Retrieval-Augmented Generation

This project helps developers and researchers evaluate different components of Retrieval-Augmented Generation (RAG) systems. It takes various RAG configurations, such as different retrievers, embeddings, and Large Language Models, and outputs performance metrics and visualizations. The primary users are AI/ML engineers and researchers building or optimizing RAG applications.
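
As a rough illustration of the kind of component sweep this enables, the sketch below iterates over hypothetical retriever, embedding, and LLM choices and collects a metric for each combination. The component names and the evaluate helper are placeholders for illustration only, not XRAG's actual API or configuration format.

```python
from itertools import product

# Hypothetical component choices to sweep over (illustrative names only).
retrievers = ["bm25", "dense", "hybrid"]
embeddings = ["bge-small", "e5-base"]
llms = ["gpt-4o-mini", "llama-3-8b"]

def evaluate(retriever: str, embedding: str, llm: str) -> dict:
    # Placeholder: a real run would assemble the RAG pipeline from these
    # components, answer a QA benchmark, and score the outputs
    # (e.g. exact match, F1, faithfulness).
    return {"f1": 0.0}

results = []
for retriever, embedding, llm in product(retrievers, embeddings, llms):
    metrics = evaluate(retriever, embedding, llm)
    results.append({"retriever": retriever, "embedding": embedding,
                    "llm": llm, **metrics})

# Rank the component combinations by the metric you care about.
results.sort(key=lambda r: r["f1"], reverse=True)
print(results[0])
```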

RAG evaluation · LLM benchmarking · NLP research · AI engineering · Information retrieval

About RAG-Performance

SciPhi-AI/RAG-Performance

Measuring RAG solutions' throughput and latency

This tool helps RAG (Retrieval-Augmented Generation) solution developers compare the performance of different RAG frameworks when ingesting data. It takes common RAG frameworks and benchmark datasets (like Wikipedia articles or various text/PDF files) as input. It then measures and outputs key performance metrics such as data ingestion time, tokens processed per second, and megabytes processed per second, helping developers choose the most efficient framework for their specific application.
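
The sketch below shows, in generic Python, how that kind of ingestion throughput measurement can be taken: time a corpus through an ingestion function and report seconds, tokens per second, and megabytes per second. The ingest callable and the whitespace token count are assumptions for illustration, not RAG-Performance's actual implementation or API.

```python
import time
from pathlib import Path
from typing import Callable, Iterable

def benchmark_ingestion(ingest: Callable[[str], None],
                        files: Iterable[Path]) -> dict:
    """Time an ingestion function over a corpus and report throughput."""
    total_bytes = 0
    total_tokens = 0
    start = time.perf_counter()
    for path in files:
        text = path.read_text(errors="ignore")
        total_bytes += len(text.encode("utf-8"))
        total_tokens += len(text.split())  # crude whitespace token count
        ingest(text)
    elapsed = time.perf_counter() - start
    return {
        "seconds": elapsed,
        "tokens_per_sec": total_tokens / elapsed if elapsed else 0.0,
        "mb_per_sec": (total_bytes / 1e6) / elapsed if elapsed else 0.0,
    }

# Example: benchmark a no-op "framework" over local .txt files.
if __name__ == "__main__":
    corpus = list(Path(".").glob("*.txt"))
    print(benchmark_ingestion(lambda text: None, corpus))
```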

RAG-system-development · LLM-application-engineering · data-ingestion-benchmarking · framework-evaluation · system-performance-testing

Scores updated daily from GitHub, PyPI, and npm data.