crjaensch/PromptoLab
A multi-platform app that serves as a prompt catalog, an LLM playground for running and optimizing prompts, and a playground for evaluating and assessing prompts.
Built with PySide6 for cross-platform desktop support, PromptoLab integrates with multiple LLM backends—both the `llm` command-line tool and the LiteLLM library (supporting OpenAI, Groq, Google Gemini, and local models via Ollama)—and uses Qt's QSettings for persistent configuration across Windows, macOS, and Linux. Beyond basic prompt management, it includes AI-powered optimization via structured prompt patterns (TAG, PIC, LIFE) and iterative critique-and-refine loops, plus a synthetic test case generator that automatically creates diverse evaluation datasets and comparative grading for baseline performance tracking.
Stars
7
Forks
3
Language
Python
License
MIT
Category
Last pushed
Jan 18, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/crjaensch/PromptoLab"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
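The same endpoint can be called from code instead of curl. Below is a minimal Python sketch using only the standard library; the URL path layout is taken from the example above, but the shape of the JSON response (its field names) is not documented here, so the fetch helper simply returns the decoded payload as a dict.

```python
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"


def repo_quality_url(category: str, owner: str, name: str) -> str:
    """Build the quality-endpoint URL for a repo.

    Path layout (/{category}/{owner}/{name}) follows the curl example above.
    """
    return f"{BASE_URL}/{category}/{owner}/{name}"


def fetch_repo_quality(category: str, owner: str, name: str) -> dict:
    """Fetch the quality record for a repo.

    Assumes the endpoint returns JSON; the response field names are
    not documented on this page, so no schema is imposed here.
    """
    with urllib.request.urlopen(repo_quality_url(category, owner, name)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No API key needed for up to 100 requests/day, per the note above.
    print(repo_quality_url("prompt-engineering", "crjaensch", "PromptoLab"))
```

With a free key (1,000 requests/day), you would presumably attach it as a header or query parameter; since this page does not specify how, the sketch omits authentication.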
Higher-rated alternatives
Mirascope/lilypad
Open-source versioning, tracing, and annotation tooling.
Supervertaler/Supervertaler-Workbench
Open-source, AI-enhanced CAT tool with multi-LLM support, translation memory, glossary...
parea-ai/parea-sdk-py
Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea...
jeong-se-hun/autotune-skill
Eval-first tuning skill for prompts, docs, skills, and code with guards, holdouts, and stop rules.
MukundaKatta/PromptLab
Prompt experimentation workspace — A/B testing prompt variants with statistical significance testing