meistrari/prompts-royale
Automatically create prompts and make them battle each other to determine which performs best
Implements Monte Carlo matchmaking paired with Elo rating updates to efficiently rank prompt candidates across test cases, automatically generating both the prompts and the evaluation scenarios from a task description. The system models each prompt's skill as a normal distribution, selects battle pairings in proportion to each candidate's probability of being optimal, and updates ratings based on an LLM judge's verdict on the candidates' responses. Runs entirely client-side in the browser with local storage and is compatible with any LLM API.
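The selection-and-update loop described above can be sketched roughly as follows. This is an illustrative reconstruction, not the repo's actual code: the K-factor, the initial rating and uncertainty, the smoothing constant, and the uncertainty-shrink schedule are all assumptions.

```python
import random

K = 32  # Elo K-factor (assumed; prompts-royale may use a different value)

class Candidate:
    """A prompt candidate whose skill is modeled as a normal distribution."""
    def __init__(self, name, rating=1000.0, sigma=350.0):
        self.name = name
        self.rating = rating  # distribution mean, doubling as the Elo rating
        self.sigma = sigma    # uncertainty; shrinks as battles accumulate

def expected_score(r_a, r_b):
    """Standard Elo expected score of A against B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def prob_optimal(candidates, n_samples=500):
    """Monte Carlo estimate of each candidate's probability of being best:
    sample every skill distribution and count how often each tops the draw."""
    wins = {c.name: 0 for c in candidates}
    for _ in range(n_samples):
        draws = {c.name: random.gauss(c.rating, c.sigma) for c in candidates}
        wins[max(draws, key=draws.get)] += 1
    return {name: w / n_samples for name, w in wins.items()}

def pick_pair(candidates, n_samples=500):
    """Select two combatants with probability proportional to P(optimal),
    plus a small additive smoothing so no candidate is starved of battles."""
    p = prob_optimal(candidates, n_samples)
    first = random.choices(candidates, weights=[p[c.name] + 0.01 for c in candidates])[0]
    rest = [c for c in candidates if c is not first]
    second = random.choices(rest, weights=[p[c.name] + 0.01 for c in rest])[0]
    return first, second

def update(winner, loser):
    """Elo update after the LLM judge declares a winner; shrink uncertainty."""
    e_w = expected_score(winner.rating, loser.rating)
    winner.rating += K * (1 - e_w)
    loser.rating -= K * (1 - e_w)
    winner.sigma = max(50.0, winner.sigma * 0.9)
    loser.sigma = max(50.0, loser.sigma * 0.9)
```

In a full run, each battle would render both prompts against a shared test case, pass the two responses to the judge, and feed its verdict into `update`.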
Stars: 603
Forks: 71
Language: Vue
License: —
Category: prompt-engineering
Last pushed: Sep 01, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/meistrari/prompts-royale"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
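The same endpoint can be called programmatically; a minimal sketch using only Python's standard library, mirroring the curl example above. The `quality_url`/`fetch_quality` helpers are illustrative names, and the response schema is not documented here, so the sketch just returns the parsed JSON.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repo's quality record."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the quality record; no API key needed within the free daily limit."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Network call, equivalent to the curl example:
# print(json.dumps(fetch_quality("prompt-engineering", "meistrari", "prompts-royale"), indent=2))
```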
Higher-rated alternatives
microsoft/promptflow
Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.
pezzolabs/pezzo
🕹️ Open-source, developer-first LLMOps platform designed to streamline prompt design, version...
promptdesk/promptdesk
Promptdesk is a tool designed for effectively creating, organizing, and evaluating prompts and...
cremich/promptz
Resource Library for AI-assisted software development with kiro
scafoldr/scafoldr
Building an open-source alternative to v0 and Lovable.