AIAnytime/rag-evaluator

A library for evaluating Retrieval-Augmented Generation (RAG) systems using traditional metrics.

Score: 49 / 100 (Emerging)

Computes eleven evaluation metrics, including BLEU, ROUGE, BERTScore, METEOR, and MAUVE, to assess generated responses along semantic-similarity, fluency, readability, and bias dimensions. Provides both a Python API for programmatic evaluation and a Streamlit web interface for interactive analysis. Designed for end-to-end RAG pipeline assessment without requiring external model APIs.
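To make the first of those metrics concrete, here is a minimal, self-contained sketch of what BLEU-1 (clipped unigram precision with a brevity penalty) computes. This is an illustration of the metric itself, not the rag-evaluator library's implementation or API:

```python
import math
from collections import Counter


def bleu1(reference: str, candidate: str) -> float:
    """Illustrative BLEU-1: clipped unigram precision times brevity penalty."""
    ref = reference.split()
    cand = candidate.split()
    if not cand:
        return 0.0
    # Clip each candidate unigram count by its count in the reference.
    ref_counts = Counter(ref)
    overlap = sum(min(n, ref_counts[w]) for w, n in Counter(cand).items())
    precision = overlap / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision


print(bleu1("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
print(bleu1("the cat sat", "the the the"))  # clipping caps repeated words
```

In practice the library's full BLEU also aggregates higher-order n-gram precisions; this sketch only shows the unigram case.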

No commits in the last 6 months. Available on PyPI.

Stale 6m
Maintenance 0 / 25
Adoption 12 / 25
Maturity 18 / 25
Community 19 / 25


Stars: 42
Forks: 18
Language: Python
License: MIT
Last pushed: Aug 10, 2024
Monthly downloads: 65
Commits (30d): 0
Dependencies: 7

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/AIAnytime/rag-evaluator"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.