MrTsepa/autoevolve
An AI agent that evolves strategies through automated overnight self-play. A generic framework with a GEPA-inspired feedback loop and Elo tracking.
It separates mutation (LLM-driven code iteration), evaluation (head-to-head benchmarking), and rating (a Bradley-Terry model for order-independent skill estimation) into pluggable components, enabling domain-specific arenas ranging from game bots to prompt optimization. It integrates with Claude Code as a skill for autonomous overnight experiments, or runs standalone via a Python CLI with `matches.json` persistence and Pareto-front selection of high-potential parent candidates.
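The rating component described above is a Bradley-Terry model, which estimates each candidate's strength from pairwise win/loss records regardless of match order. A minimal sketch of fitting it with the classic MM (Zermelo) update, assuming match results arrive as `(winner, loser)` pairs — the function name and data shape here are illustrative, not the repo's actual API:

```python
from collections import defaultdict

def bradley_terry(matches, iters=200):
    """Fit Bradley-Terry strengths from (winner, loser) pairs using the
    MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j).
    Only pairwise win counts matter, so the result is order-independent."""
    wins = defaultdict(int)         # total wins per player
    pair_counts = defaultdict(int)  # matches played between each unordered pair
    players = set()
    for winner, loser in matches:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1
        players.update((winner, loser))

    strength = {p: 1.0 for p in players}  # uniform initial strengths
    for _ in range(iters):
        updated = {}
        for i in players:
            denom = sum(
                pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                for j in players if j != i
            )
            updated[i] = wins[i] / denom if denom else strength[i]
        # normalize so strengths sum to the number of players
        total = sum(updated.values())
        strength = {p: s * len(players) / total for p, s in updated.items()}
    return strength

# Toy arena: "a" beats "b" twice, "b" beats "c", "a" beats "c".
ratings = bradley_terry([("a", "b"), ("a", "b"), ("b", "c"), ("a", "c")])
# ratings order the players a > b > c, matching the head-to-head record
```

Under this model, the probability that `i` beats `j` is `p_i / (p_i + p_j)`, which is what makes the fitted strengths comparable across candidates that never played each other directly.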
Stars: 3
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Mar 16, 2026
Commits (30d): 0
Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/agents/MrTsepa/autoevolve"
```
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives:

- arielshad/balagan-agent: Chaos Engineering for AI Agents
- Clawland-AI/Geneclaw: Self-evolving AI agent framework with 5-layer safety gatekeeper. Agents observe failures,...
- evoplex/evoplex: Evoplex is a fast, robust and extensible platform for developing agent-based models and...
- selinayfilizp/decision: The missing layer in the AI agent stack. Teach your agent how you decide. Generates a portable...
- SharedIntellect/quorum: Evidence-grounded quality validation for AI agent outputs. Rubric-driven, multi-critic, open source.