RafalWilinski/prompt-testing-framework

Test how good your prompts are against the expected results.

12 / 100
Experimental

This tool helps you evaluate and improve the quality of your AI prompts for large language models such as OpenAI's GPT models. You provide example prompts along with their desired responses, and the framework compares those expectations against the actual outputs from the model. It is aimed at anyone creating or refining prompts for AI applications who needs consistent, accurate results.
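The core idea can be sketched in a few lines of TypeScript. The names below (`PromptTest`, `evaluate`) are hypothetical illustrations, not the framework's actual API: each test pairs a prompt with an expected response, and a runner compares the model's actual output against it.

```typescript
// Hypothetical sketch of prompt testing: compare expected vs. actual
// model output for each test case. Not the framework's real API.
interface PromptTest {
  prompt: string;
  expected: string;
}

function evaluate(
  tests: PromptTest[],
  getCompletion: (prompt: string) => string // model call, stubbed here
) {
  return tests.map(({ prompt, expected }) => {
    const actual = getCompletion(prompt);
    // A strict comparison; real frameworks often use fuzzy matching.
    return { prompt, passed: actual.trim() === expected.trim(), actual };
  });
}

// Example with a stubbed model that always answers "Paris":
const results = evaluate(
  [{ prompt: "Capital of France?", expected: "Paris" }],
  () => "Paris"
);
```

In practice the `getCompletion` callback would wrap a real LLM API call, and the comparison step is where most of the design effort goes (exact match vs. semantic similarity).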

No commits in the last 6 months.

Use this if you want to systematically check if your AI prompts consistently produce the expected outputs.

Not ideal if you are looking for a general-purpose testing framework for code or need to test AI models beyond prompt effectiveness.

AI prompt engineering, LLM application development, AI content generation, conversational AI
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 8 / 25
Community 0 / 25

How are scores calculated?

Stars: 7
Forks:
Language: TypeScript
License: None
Last pushed: Apr 12, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/RafalWilinski/prompt-testing-framework"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
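The curl call above can also be made from TypeScript. This is a minimal sketch: the base URL and path segments come from the curl example, but the shape of the returned JSON is an assumption.

```typescript
// Sketch: fetch quality data from the public API shown above.
// URL structure taken from the curl example; response shape is assumed.
const BASE = "https://pt-edge.onrender.com/api/v1/quality";

function qualityUrl(category: string, owner: string, repo: string): string {
  return `${BASE}/${category}/${owner}/${repo}`;
}

async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl("prompt-engineering", owner, repo));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Usage: `await fetchQuality("RafalWilinski", "prompt-testing-framework")` reproduces the curl request. An API key (if you have one) would presumably be passed as a header or query parameter; check the service's docs for the exact mechanism.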