serhiismetanskyi/llm-output-evaluation-with-deepeval
DeepEval LLM quality evaluation tests with LLM-as-a-judge
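This repository's own tests are not shown on this page, but as a rough sketch of the technique named in the description, a DeepEval LLM-as-a-judge test typically looks like the following (this uses DeepEval's standard AnswerRelevancyMetric API, not necessarily this repo's actual tests; the prompt and answer strings are made-up placeholders):

# Sketch of a DeepEval LLM-as-a-judge test; not taken from this repository.
# Requires `pip install deepeval` and an OPENAI_API_KEY for the default judge model.
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_answer_relevancy():
    # The judge LLM scores how relevant actual_output is to the input (0 to 1).
    metric = AnswerRelevancyMetric(threshold=0.7)
    test_case = LLMTestCase(
        input="What does DeepEval do?",  # placeholder prompt
        actual_output="DeepEval is a framework for evaluating LLM outputs.",  # placeholder answer
    )
    assert_test(test_case, [metric])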
Overall score: 14 / 100 (Experimental)
No License · No Package · No Dependents
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Mar 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/serhiismetanskyi/llm-output-evaluation-with-deepeval"
Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
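For programmatic access from Python, a minimal sketch of the same request (the endpoint returns JSON; its field names are not documented on this page, so the example just prints the raw payload):

import requests

# Fetch the quality report for this repository from the pt-edge API.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"
REPO = "serhiismetanskyi/llm-output-evaluation-with-deepeval"

response = requests.get(f"{BASE_URL}/{REPO}", timeout=10)
response.raise_for_status()
print(response.json())  # raw payload; the exact schema is not documented here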
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval (score 90)
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks

open-compass/VLMEvalKit (score 72)
Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks

EuroEval/EuroEval (score 70)
The robust European language model benchmark.

vibrantlabsai/ragas (score 70)
Supercharge Your LLM Application Evaluations 🚀

evalplus/evalplus (score 70)
Rigorous evaluation of LLM-synthesized code (NeurIPS 2023 & COLM 2024)