UltraDeep-Tech/lcb-bench
LLM Cognitive Bias Benchmark: 1,500 test cases measuring 30 cognitive biases across 7 categories. Produces a standardized LCB Score for cross-model comparison.
Stars: —
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 20, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/UltraDeep-Tech/lcb-bench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
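The curl call above can be wrapped in a small Python helper. A minimal sketch, assuming only the endpoint path shown in the example; the response schema is not documented here, so the helper just fetches and decodes JSON, and the function names are hypothetical.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given repository."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch the quality record and decode it as JSON.

    Raises urllib.error.HTTPError on HTTP errors (e.g. when the
    unauthenticated 100 requests/day limit is exceeded).
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)
```

For example, `fetch_quality("UltraDeep-Tech", "lcb-bench")` reproduces the curl request above; inspect the returned dict to see the actual fields.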
Higher-rated alternatives
cvs-health/langfair
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
gnai-creator/aletheion-llm-v2
Decoder-only LLM with integrated epistemic tomography. Knows what it doesn't know.
bws82/biasclear
Structural bias detection and correction engine built on Persistent Influence Theory (PIT).
KID-22/LLM-IR-Bias-Fairness-Survey
The repository for a survey on bias and fairness in information retrieval (IR) with LLMs.
BetterForAll/HonestyMeter
HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content,...