cvs-health/langfair
LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments.
When building applications with large language models (LLMs), it's crucial to ensure they are fair and unbiased for your specific use case. LangFair helps you assess potential biases in LLM outputs, such as toxicity or stereotyping, by letting you feed in your own real-world prompts and computing fairness metrics over the model's responses. It is aimed at AI product managers, responsible-AI teams, and data scientists building and deploying LLM-powered applications.
255 stars. Available on PyPI.
Use this if you need to evaluate the bias and fairness of an LLM's responses for your specific application, especially for text generation and summarization tasks.
Not ideal if you are looking for a general-purpose LLM benchmarking tool that doesn't focus on use-case-specific prompts or output-based fairness metrics.
Stars: 255
Forks: 41
Language: Python
License: —
Category: —
Last pushed: Jan 09, 2026
Commits (30d): 0
Dependencies: 17
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cvs-health/langfair"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
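The curl call above can also be issued from Python with only the standard library. A minimal sketch follows: the endpoint path is taken from the curl example, but the assumption that the response is a JSON object (and any particular field names in it) is not confirmed by this page.

```python
import json

# Base endpoint copied from the page's curl example.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def build_url(owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality record (requires network access).

    Assumes the API returns a JSON object; the schema is not documented here.
    """
    from urllib.request import urlopen  # stdlib; no extra dependencies
    with urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)

print(build_url("cvs-health", "langfair"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/cvs-health/langfair
```

Within the free tier, `fetch_quality("cvs-health", "langfair")` needs no key; the authentication mechanism for the 1,000/day tier is not described on this page.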
Related models
google-deepmind/long-form-factuality
Benchmarking long-form factuality in large language models. Original code for our paper...
gnai-creator/aletheion-llm-v2
Decoder-only LLM with integrated epistemic tomography. Knows what it doesn't know.
sandylaker/ib-edl
Calibrating LLMs with Information-Theoretic Evidential Deep Learning (ICLR 2025)
MLD3/steerability
An open-source evaluation framework for measuring LLM steerability.
nightdessert/Retrieval_Head
open-source code for paper: Retrieval Head Mechanistically Explains Long-Context Factuality