justindobbs/Tracecore
Deterministic runtime for agent evaluation
Provides reproducible agent execution by freezing the inputs to a run (agent, task, seed, budgets) and enforcing hard step and tool-call limits; deterministic validation then emits a binary verdict. Run artifacts conform to a standardized JSON schema for offline validation, and the reference Python runtime ships with a FastAPI dashboard, a CLI, and spec bundles that implementations in other languages can reference for compliance.
Available on PyPI.
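The frozen-input idea above can be sketched in a few lines: fix the agent, task, seed, and budgets up front, drive all randomness from the seed, and cut the run off at hard limits. This is an illustrative sketch only; the field names (`seed`, `budgets`, `max_steps`, `max_tool_calls`) and the verdict logic are assumptions, not Tracecore's actual schema or API.

```python
import random

def run_frozen(spec):
    """Replay-safe run: identical spec => identical artifact (sketch)."""
    rng = random.Random(spec["seed"])  # seeded RNG makes the run reproducible
    steps = 0
    tool_calls = 0
    trace = []
    while steps < spec["budgets"]["max_steps"]:  # hard step limit
        steps += 1
        action = rng.choice(["think", "tool", "answer"])
        if action == "tool":
            tool_calls += 1
            if tool_calls > spec["budgets"]["max_tool_calls"]:  # hard tool-call limit
                return {"verdict": "fail", "reason": "tool budget exceeded",
                        "trace": trace}
        trace.append(action)
        if action == "answer":
            break
    # Binary verdict: did the run produce an answer within budget?
    verdict = "pass" if trace and trace[-1] == "answer" else "fail"
    return {"verdict": verdict, "steps": steps, "trace": trace}

spec = {"agent": "demo", "task": "echo", "seed": 7,
        "budgets": {"max_steps": 10, "max_tool_calls": 3}}
assert run_frozen(spec) == run_frozen(spec)  # same frozen inputs, same artifact
```

Because every source of nondeterminism is derived from the frozen seed, re-running the same spec yields a byte-identical artifact, which is what makes offline validation against a JSON schema meaningful.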
Stars
7
Forks
—
Language
Python
License
MIT
Last pushed
Mar 10, 2026
Monthly downloads
949
Commits (30d)
0
Dependencies
8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/justindobbs/Tracecore"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
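For scripted access, the endpoint path from the curl example above can be parameterized by owner and repository name. This is a small convenience sketch; only the URL shape shown in the example is taken from the listing, and the commented-out fetch is an assumption about plain unauthenticated GET access.

```python
# Base path taken from the curl example above; owner/repo are the only
# variable parts of the endpoint.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given repository."""
    return f"{BASE}/{owner}/{repo}"

print(quality_url("justindobbs", "Tracecore"))
# → https://pt-edge.onrender.com/api/v1/quality/agents/justindobbs/Tracecore

# To actually fetch (uncomment; assumes a plain GET with no key on the free tier):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(quality_url("justindobbs", "Tracecore")))
```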
Higher-rated alternatives
StonyBrookNLP/appworld
🌍 AppWorld: A Controllable World of Apps and People for Benchmarking Function Calling and...
qualifire-dev/rogue
AI Agent Evaluator & Red Team Platform
future-agi/ai-evaluation
Evaluation Framework for all your AI related Workflows
microsoft/WindowsAgentArena
Windows Agent Arena (WAA) 🪟 is a scalable OS platform for testing and benchmarking of...
agentscope-ai/OpenJudge
OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards