chris-santiago/ml-debate-lab
Structured ML hypothesis investigation for Claude Code — adversarial critique, empirical testing, peer review, and coherence audit. Available as a Claude Code plugin. Benchmark for context-isolated multi-agent debate on ML methodology tasks — where the correct answer is sometimes “this work is fine.”
Stars: —
Forks: —
Language: Python
License: MIT
Last pushed: Apr 10, 2026
Commits (30d): 0
Get this data via API:
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/chris-santiago/ml-debate-lab"
The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
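
For scripted access, here is a minimal Python sketch of the same call using the requests library. The endpoint URL is taken from the curl example above; the response is assumed to be JSON, and its schema is not documented here, so the code simply prints the parsed payload.

import requests

# Same endpoint as the curl example above.
url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/chris-santiago/ml-debate-lab"
)

resp = requests.get(url, timeout=10)
resp.raise_for_status()  # raise on 4xx/5xx, e.g. if the daily rate limit is hit
print(resp.json())       # assumed JSON payload; field names are not documented here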
Higher-rated alternatives
fastapi/fastapi
FastAPI framework, high performance, easy to learn, fast to code, ready for production
scikit-learn/scikit-learn
scikit-learn: machine learning in Python
probabl-ai/skore
Track your Data Science. Skore's open-source Python library accelerates ML model development...
Farama-Foundation/Gymnasium
An API standard for single-agent reinforcement learning environments, with popular reference...
WMD-group/SMACT
Python package to aid materials design and informatics