laiso/ts-bench
Measure and compare the performance of AI coding agents on TypeScript tasks.
Provides two benchmark datasets, Exercism TypeScript exercises (v1) and Docker-based SWE-Lancer monorepo tasks (v2), behind a CLI that supports multiple AI agents and providers. Built on Bun, with reproducible baselines, GitHub Actions workflows for both datasets, and structured spec documentation covering methodology and agent-specific caveats. Also includes web UI generation for visualizing SWE-Lancer tasks and frozen release tags so comparisons stay reproducible across runs.
Stars: 210
Forks: 10
Language: TypeScript
License: —
Category: —
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/laiso/ts-bench"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
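The same endpoint can be called from code. Below is a minimal TypeScript sketch using the global fetch API; the helper name fetchRepoQuality is illustrative, and the response schema is not documented here, so the JSON is printed untyped rather than parsed into a specific shape.

// Fetch the quality data for a repository from the pt-edge API.
// The endpoint URL comes from the curl example above; the response
// schema is an assumption, so the JSON is logged without typing it.
async function fetchRepoQuality(repo: string): Promise<unknown> {
  const res = await fetch(
    `https://pt-edge.onrender.com/api/v1/quality/agents/${repo}`,
  );
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchRepoQuality("laiso/ts-bench")
  .then((data) => console.log(JSON.stringify(data, null, 2)))
  .catch((err) => console.error(err));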
Higher-rated alternatives
StonyBrookNLP/appworld: 🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue: AI Agent Evaluator & Red Team Platform
future-agi/ai-evaluation: Evaluation Framework for all your AI related Workflows
microsoft/WindowsAgentArena: Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
agentscope-ai/OpenJudge: OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards