microsoft/promptpex
Test Generation for Prompts
Automatically extracts output rules from natural language prompts and generates targeted unit tests to validate whether LLM responses comply with those rules across different models. Uses LLM-based evaluation to assess test outcomes and integrates with OpenAI Evals API for standardized test export and execution via GitHub Models.
Stars: 158
Forks: 19
Language: TeX
License: CC-BY-4.0
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/microsoft/promptpex"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
Related tools
dottxt-ai/outlines
Structured Outputs
takashiishida/arxiv-to-prompt
Transform arXiv papers into a single LaTeX source that can be used as a prompt for asking LLMs...
AI-secure/aug-pe
[ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text
Spr-Aachen/LLM-PromptMaster
A simple LLM-powered chatbot.
equinor/promptly
A prompt collection for testing and evaluation of LLMs.