ltzheng/agent-studio
[ICLR 2025] A trinity of environments, tools, and benchmarks for general virtual agents
Provides generic video and action interfaces (GUI/API) across desktop applications and terminals, with auto-evaluation capabilities and language feedback. Includes three decomposed datasets—GroundUI, IDMBench, and CriticBench—targeting specific agent abilities like UI grounding, video learning, and success detection. Built-in annotation tools enable creation of structured benchmark tasks and video-action trajectories for training and evaluation.
229 stars. No commits in the last 6 months.
Stars
229
Forks
30
Language
Python
License
AGPL-3.0
Category
Last pushed
Jun 16, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/ltzheng/agent-studio"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
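The curl command above can also be scripted. A minimal Python sketch, assuming the endpoint returns JSON (the response schema is not documented on this page and is an assumption; only the URL comes from the listing):

```python
# Sketch of calling the quality API for a given repository.
# Only the base URL is taken from the page; the JSON schema is assumed.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the JSON payload (network call; schema assumed)."""
    with urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    print(quality_url("ltzheng", "agent-studio"))
```

Anonymous use is limited to 100 requests/day, so a script polling many repositories should add a key or throttle accordingly.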
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards