rahulthadhani/llm-benchmark
A benchmark suite that tests how zero-shot, few-shot, chain-of-thought, and role prompting strategies affect LLM accuracy across 200 reasoning, coding, factual, and ambiguous tasks.
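To make the four strategies concrete, here is a minimal sketch of how a suite like this might render a single task under each one. The task text, dictionary keys, and templates are illustrative assumptions, not code from rahulthadhani/llm-benchmark.

# Illustrative sketch only: templates and task are hypothetical,
# not taken from rahulthadhani/llm-benchmark.
TASK = "If all bloops are razzies and all razzies are lazzies, are all bloops lazzies?"

PROMPTS = {
    "zero_shot": TASK,
    "few_shot": (
        "Q: If A implies B and B implies C, does A imply C? A: Yes.\n"
        f"Q: {TASK} A:"
    ),
    "chain_of_thought": f"{TASK} Let's think step by step.",
    "role": f"You are a careful logician. {TASK}",
}

for name, prompt in PROMPTS.items():
    print(f"--- {name} ---\n{prompt}\n")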
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Mar 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/rahulthadhani/llm-benchmark"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
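The same endpoint can be called from Python. A minimal sketch using requests: the URL is copied from the curl command above, but the response schema is not documented here, so the code simply prints whatever JSON comes back.

import requests

# Endpoint copied from the curl example above; no API key is needed
# for up to 100 requests/day.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/rahulthadhani/llm-benchmark"
)

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # surface 4xx/5xx errors before parsing
print(resp.json())       # schema undocumented here, so print it verbatim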
Higher-rated alternatives
microsoft/promptbench
A unified evaluation framework for large language models
uptrain-ai/uptrain
UpTrain is an open-source unified platform to evaluate and improve Generative AI applications....
gabe-mousa/Apolien
AI Safety Evaluation Library
levitation-opensource/Manipulative-Expression-Recognition
MER is software that identifies and highlights manipulative communication in text from human...