MSKazemi/ExaBench-QA
ExaBench-QA is a benchmark and dataset for evaluating role-aware, LLM-based AI agents for High-Performance Computing (HPC). It includes a corpus of queries, taxonomies, and a JSON schema.
Stars
—
Forks
—
Language
Jupyter Notebook
License
Apache-2.0
Category
—
Last pushed
Nov 04, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/MSKazemi/ExaBench-QA"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
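For programmatic use, the endpoint above can be queried from any HTTP client. The sketch below is a minimal Python example, assuming the API returns a JSON object; the field names (`name`, `language`, `license`) mirror the fields shown on this page but are assumptions, not a documented schema — inspect a live response to confirm.

```python
import json
import urllib.request

API_URL = "https://pt-edge.onrender.com/api/v1/quality/agents/MSKazemi/ExaBench-QA"

def fetch_repo_record(url: str = API_URL) -> dict:
    """Fetch the repo's quality record as a dict (requires network access)."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.loads(resp.read().decode("utf-8"))

def summarize(record: dict) -> str:
    """Render a one-line summary; the field names here are hypothetical."""
    name = record.get("name", "unknown")
    lang = record.get("language", "n/a")
    license_ = record.get("license", "n/a")
    return f"{name} [{lang}] license={license_}"

if __name__ == "__main__":
    # Hypothetical payload mirroring the fields on this page, so the
    # example runs without a live network call.
    sample = {
        "name": "MSKazemi/ExaBench-QA",
        "language": "Jupyter Notebook",
        "license": "Apache-2.0",
    }
    print(summarize(sample))
```

Swapping `summarize(sample)` for `summarize(fetch_repo_record())` queries the live endpoint, subject to the rate limits above.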
Higher-rated alternatives
Tongyi-MAI/MobileWorld
Benchmarking Autonomous Mobile Agents in Agent-User Interactive and MCP-Augmented Environments
OSU-NLP-Group/ScienceAgentBench
[ICLR'25] ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven...
ml-dev-bench/ml-dev-bench
ML-Dev-Bench is a benchmark for evaluating AI agents against various ML development tasks.
michaelabrt/clarte-benchmark
Paired A/B benchmark suite for Clarté - measures how dependency-graph intelligence affects AI...
zzhiyuann/agent-bench
Benchmarking framework for AI agents — pytest for AI agents. Define tasks in YAML, run against...