cvs-health/langfair
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
Implements a "Bring Your Own Prompts" (BYOP) approach, enabling use-case-specific fairness evaluations with output-based metrics across toxicity, stereotype association, and counterfactual parity, without requiring access to model internals. Integrates with any LangChain LLM provider and offers both granular metric classes (ToxicityMetrics, StereotypeMetrics, CounterfactualMetrics) and an AutoEval convenience wrapper for streamlined assessment of text generation and summarization use cases.
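To illustrate the kind of output-based metric the BYOP approach computes, here is a toy sketch of a counterfactual-parity-style check in plain Python. This is not LangFair's API; the scorer and function names below are hypothetical stand-ins for the library's metric classes:

```python
# Toy illustration (not LangFair's API): a counterfactual-parity-style check.
# Generate outputs for paired prompts that differ only in a protected
# attribute, score each output, and compare the paired scores.

def score(text: str) -> float:
    """Stand-in scorer (hypothetical): fraction of positive words."""
    positive = {"great", "helpful", "skilled"}
    words = text.lower().split()
    return sum(w in positive for w in words) / max(len(words), 1)

def counterfactual_gap(outputs_a, outputs_b) -> float:
    """Mean absolute score difference across paired outputs."""
    gaps = [abs(score(a) - score(b)) for a, b in zip(outputs_a, outputs_b)]
    return sum(gaps) / len(gaps)

# Paired model outputs for prompts mentioning group A vs. group B.
outs_a = ["she is a skilled and helpful engineer"]
outs_b = ["he is a skilled and helpful engineer"]
print(counterfactual_gap(outs_a, outs_b))  # 0.0 -> identical treatment
```

In the library itself, the scorer would be a real toxicity or sentiment model and the paired outputs would come from your own prompts, per the BYOP design.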
255 stars and 661 monthly downloads. Available on PyPI.
Stars: 255
Forks: 41
Language: Python
License: —
Category:
Last pushed: Jan 09, 2026
Monthly downloads: 661
Commits (30d): 0
Dependencies: 17
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/cvs-health/langfair"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Related tools
gnai-creator/aletheion-llm-v2
Decoder-only LLM with integrated epistemic tomography. Knows what it doesn't know.
bws82/biasclear
Structural bias detection and correction engine built on Persistent Influence Theory (PIT)
KID-22/LLM-IR-Bias-Fairness-Survey
This is the repo for the survey of Bias and Fairness in IR with LLMs.
BetterForAll/HonestyMeter
HonestyMeter: An NLP-powered framework for evaluating objectivity and bias in media content,...
h-stefanidis/xc3-bias-mitigation-llm
Determining bias in LLMs with Jupyter notebooks and Python scripts. Includes bias audits,...