thunderous77/GLaPE

Official implementation of "GLaPE: Gold Label-agnostic Prompt Evaluation and Optimization for Large Language Models" (more updates to come).

Quality score: 20 / 100 (Experimental)

This project helps AI developers and researchers optimize prompts for large language models without human-labeled "gold standard" answers. Given a dataset and an initial prompt, it outputs an improved prompt that performs better. It is aimed at anyone building or fine-tuning LLM applications who wants to improve prompt performance more efficiently.

No commits in the last 6 months.

Use this if you are developing LLM applications and want to iteratively improve your prompts without the time and cost of creating extensive human-labeled evaluation datasets.

Not ideal if you already have a perfectly curated, gold-standard labeled dataset for prompt evaluation, or if your prompt optimization needs are very simple.

Tags: LLM-development, prompt-engineering, AI-research, natural-language-processing, machine-learning-operations
No License · Stale (6 mo) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Python
License: None
Last pushed: Feb 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/thunderous77/GLaPE"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
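The same endpoint can be called from Python using only the standard library. This is a minimal sketch, assuming the endpoint returns a JSON body; the shape of that JSON is not documented here, so the function simply returns the decoded dictionary.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, repo: str) -> str:
    """Build the API URL for a repo, e.g. 'thunderous77/GLaPE'."""
    return f"{API_BASE}/{category}/{repo}"


def fetch_quality(category: str, repo: str) -> dict:
    """Fetch and decode the quality report as JSON.

    No API key is needed for up to 100 requests/day. The JSON schema
    is an assumption; inspect the returned dict for available fields.
    """
    with urllib.request.urlopen(quality_url(category, repo)) as resp:
        return json.load(resp)


# Example usage (requires network access):
#   data = fetch_quality("prompt-engineering", "thunderous77/GLaPE")
#   print(json.dumps(data, indent=2))
```

The URL is assembled from the path segments shown in the curl command above, so swapping in another category or repository slug queries a different project.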