sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
τ²-Bench combines text-based and full-duplex voice evaluation across customer-service domains (airline, retail, telecom, banking) with configurable tool policies and real-time audio provider integration. It supports knowledge-retrieval pipelines with RAG backends, configurable embeddings, and agentic search for the banking domain. Built on a modular orchestrator architecture, it supports both turn-based and simultaneous conversation modes via providers such as OpenAI and Gemini.
829 stars. Actively maintained with 20 commits in the last 30 days.
Stars: 829
Forks: 210
Language: Python
License: MIT
Last pushed: Mar 11, 2026
Commits (30d): 20
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/sierra-research/tau2-bench"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
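For programmatic use, the curl command above can be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the endpoint URL is taken from the page, but the response schema and the key-passing mechanism are not documented here, so `fetch_quality` simply returns whatever JSON the keyless endpoint serves.

```python
import json
import urllib.request

# Base endpoint from the page; the per-repo path appends "owner/repo".
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the quality record as JSON (keyless tier: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

Example: `fetch_quality("sierra-research", "tau2-bench")` issues the same request as the curl command shown above.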
Related tools
xlang-ai/OSWorld: [NeurIPS 2024] Benchmarking multimodal agents for open-ended tasks in real computer environments
bigcode-project/bigcodebench: [ICLR'25] Benchmarking code generation towards AGI
scicode-bench/SciCode: A benchmark that challenges language models to code solutions for scientific problems
microsoft/SWE-bench-Live: [NeurIPS 2025 D&B] 🚀 SWE-bench goes live!
THUDM/AgentBench: A comprehensive benchmark to evaluate LLMs as agents (ICLR'24)