Codegrammer999/prompt-bench

A benchmark suite comparing zero-shot, few-shot, chain-of-thought (CoT), and self-consistency prompting on a classification task. Each run is traced in Langfuse with accuracy, latency, and token usage; findings.md documents which technique wins and why.
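Of the four techniques, self-consistency is the only one with a non-trivial aggregation step: sample several reasoning paths and keep the majority answer. A minimal sketch of that voting step, assuming the benchmark reduces each sampled completion to a class label (the function name and sample labels here are hypothetical, not taken from the repo):

```python
from collections import Counter

def self_consistency_vote(answers):
    """Return the most frequent answer among sampled completions.

    Self-consistency samples multiple reasoning paths for the same
    input and keeps the majority-vote answer; ties break on the
    first answer that reaches the top count.
    """
    counts = Counter(answers)
    top = max(counts.values())
    for a in answers:  # first answer reaching the top count wins ties
        if counts[a] == top:
            return a

# Hypothetical labels extracted from five CoT samples on one input:
samples = ["positive", "negative", "positive", "positive", "neutral"]
print(self_consistency_vote(samples))  # -> positive
```

In a real run each element of `samples` would come from a separate temperature-sampled model call, with the label parsed out of the completion before voting.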

Overall score: 15 / 100 (Experimental)
No License · No Package · No Dependents

Maintenance: 13 / 25
Adoption: 1 / 25
Maturity: 1 / 25
Community: 0 / 25


Stars: 1
Forks:
Language: Python
License: none
Last pushed: Mar 07, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/Codegrammer999/prompt-bench"

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.