vectara/open-rag-eval

RAG evaluation without the need for "golden answers"

Score: 59 / 100 (Established)

Implements reference-free evaluation metrics (UMBRELA, AutoNuggetizer) based on research from the University of Waterloo, eliminating the need for golden answers while still supporting optional reference-based metrics when they are available. Provides modular connectors for Vectara, LlamaIndex, and LangChain RAG platforms, with built-in TREC-RAG benchmark metrics and per-query scoring for detailed analysis. Uses LLM judges and open-source hallucination detection models (HHEM) to assess retrieval quality and factual consistency across RAG pipelines.
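
To make the reference-free idea concrete, the sketch below shows UMBRELA-style grading: an LLM judge scores each retrieved passage against the query on a 0-3 relevance scale, with no golden answer required. The prompt text, function name, and the llm_judge callable are illustrative assumptions for this page, not open-rag-eval's actual implementation or API; use the library's own evaluators in practice.

from typing import Callable, List

# Illustrative grading prompt (an assumption, not the exact prompt used by
# open-rag-eval or the UMBRELA paper).
GRADING_PROMPT = (
    "Grade how well the passage answers the query on a 0-3 scale "
    "(0 = not relevant, 3 = fully answers the query). Reply with a single digit.\n\n"
    "Query: {query}\n\nPassage: {passage}"
)

def umbrela_style_scores(
    query: str,
    passages: List[str],
    llm_judge: Callable[[str], str],  # any function that sends a prompt to an LLM and returns its reply
) -> List[int]:
    # Grade each retrieved passage 0-3 with an LLM judge; no reference answer needed.
    scores = []
    for passage in passages:
        reply = llm_judge(GRADING_PROMPT.format(query=query, passage=passage))
        digits = [ch for ch in reply if ch.isdigit()]
        scores.append(min(int(digits[0]), 3) if digits else 0)
    return scores

if __name__ == "__main__":
    stub_judge = lambda prompt: "2"  # swap in a real LLM call here
    print(umbrela_style_scores("what is RAG?", ["Retrieval-augmented generation combines ..."], stub_judge))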

347 stars and 645 monthly downloads. Available on PyPI.

Maintenance: 6 / 25
Adoption: 16 / 25
Maturity: 25 / 25
Community: 12 / 25

Stars: 347
Forks: 21
Language: Python
License: Apache-2.0
Last pushed: Dec 15, 2025
Monthly downloads: 645
Commits (30d): 0
Dependencies: 28

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/open-rag-eval"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
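
For scripted use, here is a minimal Python equivalent of the curl call above; the response is assumed to be a JSON payload, and its field names are not documented on this page.

import requests

# Same endpoint as the curl example; no API key needed for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/rag/vectara/open-rag-eval"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()  # assumed JSON; inspect the payload to see the exact fields
print(data)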