zhangxjohn/LLM-Agent-Benchmark-List
A benchmark list for the evaluation of large language models.
Score: 39 / 100 (Emerging)
160 stars.
No Package
No Dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 10 / 25
Stars: 160
Forks: 9
Language: —
License: Apache-2.0
Category: —
Last pushed: Feb 26, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/zhangxjohn/LLM-Agent-Benchmark-List"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
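The curl command above can be wrapped in a small Python helper. The endpoint path is taken verbatim from the command; the shape of the JSON response (and any field names in it) is an assumption, so the sketch only decodes the body and leaves interpretation to the caller.

```python
import json
import urllib.request

# Base path copied from the curl example; the "llm-tools" segment is part of the documented URL.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-report URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the report and decode it as JSON.

    The response schema is not documented here, so the raw decoded
    object is returned as-is rather than mapped to named fields.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Reproduces the URL from the curl example without hitting the network.
    print(quality_url("zhangxjohn", "LLM-Agent-Benchmark-List"))
```

Calling `fetch_quality` performs the same request as the curl line; within the keyless tier, staying under 100 calls/day avoids rate limiting.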
Higher-rated alternatives
xlang-ai/OSWorld (72): [NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
sierra-research/tau2-bench (64): τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
bigcode-project/bigcodebench (64): [ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench (55): A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode (51): A benchmark that challenges language models to code solutions for scientific problems