AIF360 vs responsibly

AIF360 is a mature, production-ready fairness auditing framework; responsibly complements it with specialized bias detection and mitigation techniques. The two are best used together rather than as direct alternatives.

                     AIF360          responsibly
Score                79 (Verified)   44 (Emerging)
Maintenance (/25)    6               0
Adoption (/25)       23              9
Maturity (/25)       25              16
Community (/25)      25              19
Stars                2,763           100
Forks                902             22
Downloads            34,451
Commits (30d)        0               0
Language             Python          Python
License              Apache-2.0      MIT
Risk flags           None            Stale (6m), No Package, No Dependents

About AIF360

Trusted-AI/AIF360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Provides pre- and in-processing debiasing algorithms (reweighting, disparate impact removal, adversarial debiasing) alongside 20+ fairness metrics spanning group fairness, individual fairness, and sample distortion measures. Available in both Python and R with modular dependencies, allowing users to install only required algorithm backends (TensorFlow for adversarial debiasing, CVXPY for optimization-based methods). Extensible architecture designed for research-to-practice translation across finance, HR, healthcare, and education domains.
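The group-fairness metrics and the reweighting pre-processor mentioned above can be sketched from first principles. The sketch below is illustrative only: the toy samples, the `favorable_rate` helper, and the variable names are assumptions of this example, not AIF360's API (AIF360 wraps the same ideas in its `BinaryLabelDatasetMetric` and `Reweighing` classes).

```python
from collections import Counter

# Toy labeled data: (protected_attribute, label), 1 = favorable outcome.
# Values are made up for illustration; group 0 is the unprivileged group.
samples = [(0, 1), (0, 0), (0, 0), (0, 0),
           (1, 1), (1, 1), (1, 1), (1, 0)]

def favorable_rate(samples, group):
    """P(Y=1 | A=group): share of favorable outcomes within one group."""
    outcomes = [y for a, y in samples if a == group]
    return sum(outcomes) / len(outcomes)

# Statistical parity difference: P(Y=1|unpriv) - P(Y=1|priv); 0 means parity.
spd = favorable_rate(samples, 0) - favorable_rate(samples, 1)

# Disparate impact: P(Y=1|unpriv) / P(Y=1|priv); the "80% rule" flags < 0.8.
di = favorable_rate(samples, 0) / favorable_rate(samples, 1)

# Reweighting (Kamiran & Calders): weight each (attribute, label) cell by
# P(A=a) * P(Y=y) / P(A=a, Y=y), so attribute and label become independent
# under the weighted distribution.
n = len(samples)
count_a = Counter(a for a, _ in samples)
count_y = Counter(y for _, y in samples)
count_ay = Counter(samples)
weights = {(a, y): (count_a[a] / n) * (count_y[y] / n) / (count_ay[(a, y)] / n)
           for a, y in count_ay}

print(f"statistical parity difference: {spd:+.2f}")  # -0.50 on this toy data
print(f"disparate impact: {di:.2f}")                 # 0.33, well below 0.8
print("reweighting weights:",
      {k: round(w, 2) for k, w in sorted(weights.items())})
```

On this toy data the weighted favorable rate of both groups comes out to 0.5, i.e. the reweighting exactly restores statistical parity, which is the effect AIF360's pre-processing reweighting algorithm has on real datasets.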

About responsibly

ResponsiblyAI/responsibly

Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰

Scores updated daily from GitHub, PyPI, and npm data.