amazon-science/auto-rag-eval
Code repo for the ICML 2024 paper "Automated Evaluation of Retrieval-Augmented Language Models with Task-Specific Exam Generation"
Score: 41 / 100 (Emerging)
Status: Stale (6m). No commits in the last 6 months. No package published; no dependents.
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 16 / 25
Community: 16 / 25
Stars: 86
Forks: 13
Language: Python
License: Apache-2.0
Category:
Last pushed: Jun 13, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/amazon-science/auto-rag-eval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives:
- ibm-self-serve-assets/JudgeIt-LLM-as-a-Judge (44): Automation framework using LLM-as-a-judge to evaluate Agentic AI, RAG, Text2SQL at scale; ...
- explore-de/rage4j (28): Evaluate your LLM-based Java apps
- nl4opt/ORQA (23): [AAAI 2025] ORQA is a new QA benchmark designed to assess the reasoning capabilities of LLMs in ...