RafalWilinski/prompt-testing-framework
Test how well your prompts perform against expected results.
This tool helps you evaluate and improve the quality of your AI prompts for large language models, such as those from OpenAI. You provide example prompts and their desired responses, and the framework compares these expectations against the actual outputs from the AI model. It is for anyone creating or refining prompts for AI applications who needs to ensure consistent, accurate results.
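To illustrate the concept (a hypothetical sketch only, not this repository's actual API or test-file format): a test case pairs a prompt with its expected response, and a scoring function compares the model's actual output against that expectation.

// Hypothetical illustration of the idea; names and shapes are assumptions,
// not this repository's actual API.
interface PromptTestCase {
  prompt: string;           // prompt sent to the model
  expectedResponse: string; // output you want the model to produce
}

// Naive exact-match scoring after trimming whitespace. Real frameworks
// typically use fuzzier comparisons (string similarity, embeddings, etc.).
function scoreOutput(testCase: PromptTestCase, actualOutput: string): boolean {
  return actualOutput.trim() === testCase.expectedResponse.trim();
}

const example: PromptTestCase = {
  prompt: "Translate 'bonjour' to English.",
  expectedResponse: "hello",
};

console.log(scoreOutput(example, "hello ")); // true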
No commits in the last 6 months.
Use this if you want to systematically check whether your AI prompts consistently produce the expected outputs.
Not ideal if you are looking for a general-purpose testing framework for code or need to test AI models beyond prompt effectiveness.
Stars: 7
Forks: —
Language: TypeScript
License: —
Category: —
Last pushed: Apr 12, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/RafalWilinski/prompt-testing-framework"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
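If you would rather call the endpoint from code than curl, here is a minimal TypeScript sketch using the built-in fetch (Node 18+). The JSON response shape is not documented on this page, so it is typed as unknown.

// Minimal sketch: fetch the same endpoint shown in the curl example above.
// The response shape is an assumption; inspect it before relying on fields.
const url =
  "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/RafalWilinski/prompt-testing-framework";

async function fetchRepoQuality(): Promise<unknown> {
  const res = await fetch(url);
  if (!res.ok) {
    throw new Error(`Request failed: ${res.status} ${res.statusText}`);
  }
  return res.json();
}

fetchRepoQuality().then((data) => console.log(data));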
Higher-rated alternatives
genieincodebottle/schemalock
LLM output contract testing CLI: define what your pipeline must return and test it against any...
antsanchez/prompto
Interact with various LLMs in your browser (LangChain.js, Angular)
Coolhand-Labs/coolhand-ruby
Zero-config LLM cost & quality monitoring for Ruby apps: automatically log AI API calls and...
joshualamerton/prompt-trace
Prompt and response tracing for LLM workflows
suhjohn/llm-workbench
UI for testing prompts across various datasets locally