LLM Evaluation Benchmarking AI Agents
There are 3 LLM evaluation benchmarking agents tracked; 1 scores above 50 (Established tier). The highest-rated is strands-agents/evals at 56/100 with 82 stars.
Get all 3 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=agents&subcategory=llm-evaluation-benchmarking&limit=20"
```
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
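As a minimal sketch of calling the endpoint above from Python, the snippet below assembles the same query URL with the standard library and maps a 0-100 score to a tier using the cutoff implied by the summary (above 50 = Established). The helper names here are illustrative, and the tier rule is inferred from this page, not from documented API behavior.

```python
from urllib.parse import urlencode

# Base endpoint from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def build_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Assemble the quality-dataset query URL used in the curl example."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

def tier(score: int) -> str:
    """Map a score to a tier; above 50 is 'Established' per the summary (assumed rule)."""
    return "Established" if score > 50 else "Emerging"

url = build_url("agents", "llm-evaluation-benchmarking")
print(url)
print(tier(56))  # strands-agents/evals scores 56/100 -> Established
```

Fetching the URL (e.g. with `urllib.request` or `requests`) then returns the three projects as JSON, subject to the rate limits noted above.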
| # | Agent | Score | Tier |
|---|---|---|---|
| 1 | strands-agents/evals: A comprehensive evaluation framework for AI agents and LLM applications. | 56 | Established |
| 2 | usestrix/benchmarks: Evaluation harness for Strix agent | | Emerging |
| 3 | eve-mas/eve-parity: Equilibrium Verification Environment (EVE) is a formal verification tool for... | | Emerging |