ghazaleh-mahmoodi/Prompting_LLMs_AS_Explainable_Metrics
Eval4NLP Shared Task on Prompting Large Language Models as Explainable Metrics
Score: 10 / 100 (Experimental)
Stale: no commits in the last 6 months.
No package published; no dependents.
Score breakdown:
Maintenance: 0 / 25
Adoption: 1 / 25
Maturity: 9 / 25
Community: 0 / 25
Stars: 1
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Oct 11, 2023
Commits (30d): 0
Get this data via the API:
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/ghazaleh-mahmoodi/Prompting_LLMs_AS_Explainable_Metrics"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
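For programmatic use, a minimal Python sketch of the same request is shown below. It only assumes the endpoint returns a JSON body, since the response schema is not documented on this page.

import json
import urllib.request

# Quality-score endpoint for this repository (same URL as the curl example above).
URL = ("https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/"
       "ghazaleh-mahmoodi/Prompting_LLMs_AS_Explainable_Metrics")

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)  # assumes a JSON response; field names are not documented here

# Pretty-print whatever fields the API returns.
print(json.dumps(data, indent=2))

The sketch uses only the standard library, so it runs without extra dependencies; keyless callers are limited to 100 requests/day as noted above.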
Higher-rated alternatives:
microsoft/promptbench (70): A unified evaluation framework for large language models
uptrain-ai/uptrain (62): UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
gabe-mousa/Apolien (45): AI Safety Evaluation Library
microsoftarchive/promptbench (45): A unified evaluation framework for large language models
babelcloud/LLM-RGB (41): LLM Reasoning and Generation Benchmark. Evaluate LLMs in complex scenarios systematically.