xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
Provides a unified benchmarking environment supporting multiple VM backends (VMware, VirtualBox, Docker with KVM, AWS) for evaluating multimodal agents on realistic desktop tasks; parallelized evaluation can cut a full benchmark run to under an hour. The framework captures screen observations and executes agent actions through desktop automation APIs, enabling systematic end-to-end assessment of task completion across diverse applications that require real OS interaction. Includes 747+ curated tasks spanning web browsing, file management, and productivity software, plus baseline results for vision-language models and agentic frameworks.
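For orientation, a minimal sketch of the gym-style agent loop such an environment exposes is shown below. The DesktopEnv import path, the pyautogui action space, and the step() return signature follow the project's quickstart, but treat the exact names, the stubbed task config, and env.close() as assumptions to verify against the current repo.

# Minimal sketch of the evaluation loop (import path, action space,
# and step() signature assumed from the project's quickstart).
from desktop_env.desktop_env import DesktopEnv

# A task config pairs a natural-language instruction with setup and
# evaluator blocks; the benchmark ships 747+ of these. This stub
# elides the "config" and "evaluator" fields a real task defines.
task_config = {
    "id": "demo-task",  # hypothetical placeholder
    "instruction": "Create a folder named 'reports' on the desktop.",
}

env = DesktopEnv(action_space="pyautogui")  # actions are pyautogui code strings
obs = env.reset(task_config=task_config)    # obs carries a screenshot of the VM

for _ in range(15):  # cap steps so the sketch always terminates
    # A real agent would map the screenshot in obs to an action string;
    # hard-coded here to keep the sketch self-contained.
    action = "pyautogui.hotkey('ctrl', 'alt', 't')"
    obs, reward, done, info = env.step(action)
    if done:
        break

env.close()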
2,664 stars. Actively maintained with 31 commits in the last 30 days.
Stars: 2,664
Forks: 411
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 31
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xlang-ai/OSWorld"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
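The same data can be fetched from Python; a short sketch using requests follows. The endpoint is the one shown above, but the response schema is not documented on this page, so the code prints the raw JSON rather than assuming field names.

# Fetch the same repo-quality record in Python (100 requests/day with
# no key; schema unknown here, so just print the raw JSON).
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/xlang-ai/OSWorld"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or outages
print(resp.json())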
Related tools
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
swefficiency/swefficiency
Benchmark harness and code for "SWE-fficiency: Can Language Models Optimize Real World...
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems