evo-eval/evoeval
EvoEval: Evolving Coding Benchmarks via LLM
Score: 50 / 100 (Established)
No commits in the last 6 months (flagged Stale 6m). Available on PyPI.
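To try the tool itself rather than this scorecard, it should be installable from PyPI; the package name below is an assumption based on the repo name, not confirmed by this page:

pip install evoeval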
Maintenance: 0 / 25
Adoption: 9 / 25
Maturity: 25 / 25
Community: 16 / 25
Stars: 81
Forks: 13
Language: Python
License: Apache-2.0
Category: (not listed)
Last pushed: Apr 06, 2024
Commits (30d): 0
Dependencies: 8
Get this data via API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/evo-eval/evoeval"

Open to everyone: 100 requests/day with no API key, or register a free key for 1,000 requests/day.
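The response format is not documented on this page, so the sketch below only fetches and pretty-prints it; it assumes the endpoint returns JSON and that curl and jq are installed, and it makes no assumptions about field names:

curl -s "https://pt-edge.onrender.com/api/v1/quality/llm-tools/evo-eval/evoeval" | jq .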
Related tools:
- EvolvingLMMs-Lab/lmms-eval (score 90): One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
- open-compass/VLMEvalKit (score 72): Open-source evaluation toolkit for large multimodal models (LMMs); supports 220+ LMMs and 80+ benchmarks
- EuroEval/EuroEval (score 70): The robust European language model benchmark
- vibrantlabsai/ragas (score 70): Supercharge Your LLM Application Evaluations 🚀
- Giskard-AI/giskard-oss (score 70): 🐢 Open-Source Evaluation & Testing library for LLM Agents