AIF360 and AI_fairness

The two projects offer overlapping fairness-auditing capabilities. AIF360 is the mature, production-ready option (comprehensive metrics, mitigation algorithms, active maintenance), while AI_fairness appears to be an educational or experimental resource with minimal adoption.

                 AIF360             AI_fairness
Score            79 (Verified)      32 (Emerging)
Maintenance      6/25               0/25
Adoption         23/25              5/25
Maturity         25/25              9/25
Community        25/25              18/25
Stars            2,763              14
Forks            902                12
Downloads        34,451             —
Commits (30d)    0                  0
Language         Python             Jupyter Notebook
License          Apache-2.0         MIT
Risk flags       None               Stale 6m, No Package, No Dependents

About AIF360

Trusted-AI/AIF360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Provides pre- and in-processing debiasing algorithms (reweighting, disparate impact removal, adversarial debiasing) alongside 20+ fairness metrics spanning group fairness, individual fairness, and sample distortion measures. Available in both Python and R with modular dependencies, allowing users to install only required algorithm backends (TensorFlow for adversarial debiasing, CVXPY for optimization-based methods). Extensible architecture designed for research-to-practice translation across finance, HR, healthcare, and education domains.
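To make the ideas above concrete, here is a minimal plain-Python sketch (not the AIF360 API) of one group-fairness metric, disparate impact, and of the Kamiran–Calders reweighting weights that underlie the reweighting pre-processor mentioned above. The toy data and names are illustrative assumptions; AIF360 provides these computations through its dataset and metric classes.

```python
# Illustrative sketch (plain Python, not AIF360's API): disparate impact and
# reweighting weights computed by hand on toy data.
from collections import Counter

# Toy records: (protected attribute g, favorable outcome y).
# Group "a" is treated as privileged, "b" as unprivileged (assumption).
data = [("a", 1), ("a", 1), ("a", 0), ("a", 0),
        ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

def favorable_rate(group):
    """P(y = 1 | g = group) over the toy records."""
    outcomes = [y for g, y in data if g == group]
    return sum(outcomes) / len(outcomes)

# Disparate impact: P(y=1 | unprivileged) / P(y=1 | privileged).
# A value near 1.0 indicates parity; the common "80% rule" flags values < 0.8.
di = favorable_rate("b") / favorable_rate("a")
print(f"disparate impact: {di:.2f}")  # 0.25 / 0.50 = 0.50

# Reweighting: w(g, y) = P(g) * P(y) / P(g, y).
# Under-represented (group, outcome) pairs get weight > 1, so the reweighted
# data makes group membership and outcome statistically independent.
n = len(data)
count_g = Counter(g for g, _ in data)
count_y = Counter(y for _, y in data)
count_gy = Counter(data)
weights = {(g, y): (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
           for (g, y) in count_gy}
print(weights)  # e.g. ("b", 1) is up-weighted to 1.5
```

AIF360 wraps the same logic behind its metric and pre-processing classes, so in practice these values come from the library rather than hand-rolled code like this.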

About AI_fairness

Ali-Alameer/AI_fairness

A GitHub repository of resources for building fair and unbiased AI systems, including libraries, tools, and tutorials on identifying and mitigating bias in machine learning models and implementing fairness in AI.

Scores updated daily from GitHub, PyPI, and npm data.