OpenGenerativeAI/llm-colosseum
Benchmark LLMs by fighting in Street Fighter 3! The new way to evaluate the quality of an LLM
Supports both text-based and vision-based LLM agents through a real-time game loop that evaluates decision-making under time pressure and incomplete information. The framework integrates with multiple LLM providers (OpenAI, Anthropic, Mistral, Ollama) via a unified API abstraction, uses DIAMBRA for Street Fighter III emulation, and ranks models with an Elo rating system across 546+ completed matches. Agents receive either text descriptions of the game state or raw screenshots, forcing them to balance strategic planning against low-latency responses in a competitive multiplayer environment.
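The match-based ranking works like a chess ladder. A minimal sketch of the standard Elo update, assuming the conventional formula with K=32 and a 1500 starting rating; the repository's actual scoring parameters may differ:

```python
# Illustrative Elo rating update (standard formula; K-factor of 32 is an
# assumption, not taken from the repo).

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float,
               score_a: float, k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b) after one match.

    score_a is 1.0 for a win, 0.5 for a draw, 0.0 for a loss.
    """
    ea = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - ea)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - ea))
    return new_a, new_b

# Two models start at 1500; the first one wins a match.
a, b = update_elo(1500, 1500, 1.0)  # → (1516.0, 1484.0)
```

Because the update is zero-sum, rating points won by one model are exactly the points lost by its opponent, so the ladder stays centered as more matches accumulate.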
1,467 stars. No commits in the last 6 months.
Stars: 1,467
Forks: 178
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenGenerativeAI/llm-colosseum"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000 requests/day.
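The same request can be made from Python's standard library. The endpoint URL is copied from the curl command above; the shape of the JSON response is not documented here, so the function simply returns the decoded body:

```python
# Fetch this repo's quality data from the API shown above.
# The response schema is an assumption (JSON body), not documented here.
import json
import urllib.request

API_URL = ("https://pt-edge.onrender.com/api/v1/quality/"
           "llm-tools/OpenGenerativeAI/llm-colosseum")

def fetch_quality(url: str = API_URL) -> dict:
    """GET the endpoint and decode its JSON response body."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```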
Higher-rated alternatives
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
microsoft/SWE-bench-Live
[NeurIPS 2025 D&B] 🚀 SWE-bench Goes Live!