yuchenzhu-research/iclr2026-cao-prompt-drift-lab
A reproducible evaluation framework for studying how small prompt variations affect instruction-following behavior in large language models. The project focuses on instruction adherence, output format robustness, and semantic consistency, supported by standardized evaluation protocols and auditable artifacts.
Stars: —
Forks: —
Language: TeX
License: —
Category: —
Last pushed: Mar 03, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/yuchenzhu-research/iclr2026-cao-prompt-drift-lab"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
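For scripted access, the same request can be made from Python. This is a minimal sketch assuming the requests library is installed and that the endpoint returns JSON; the page does not document how an API key is passed, so only the keyless call from the curl example above is shown.

# Minimal sketch: fetch the same endpoint as the curl command above.
# Assumptions: the response body is JSON; no API-key mechanism is shown
# because the page does not specify how a key would be supplied.
import json
import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/"
    "yuchenzhu-research/iclr2026-cao-prompt-drift-lab"
)

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on 4xx/5xx (e.g. hitting the daily limit)
data = resp.json()       # assumed JSON payload with the repo quality data
print(json.dumps(data, indent=2))

Running this prints the endpoint's response; keyless use is limited to the 100 requests per day noted above.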
Higher-rated alternatives
microsoft/promptbench
A unified evaluation framework for large language models
uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
gabe-mousa/Apolien
AI Safety Evaluation Library
babelcloud/LLM-RGB
LLM Reasoning and Generation Benchmark. Evaluate LLMs in complex scenarios systematically.