fairlearn and inFairness

These complementary tools address different fairness paradigms. fairlearn focuses on group fairness metrics such as demographic parity and equalized odds, while inFairness specializes in individual fairness constraints (similarity-based fairness enforced during training). They can be used together, depending on which fairness definition your use case requires.

                  fairlearn        inFairness
Score             91 (Verified)    49 (Emerging)
Maintenance       16/25            2/25
Adoption          25/25            9/25
Maturity          25/25            25/25
Community         25/25            13/25
Stars             2,213            66
Forks             484              8
Downloads         170,696
Commits (30d)     2                0
Language          Python           Python
License           MIT              Apache-2.0
Risk flags        None             Stale 6m

About fairlearn

fairlearn/fairlearn

A Python package to assess and improve fairness of machine learning models.

Provides dual assessment and mitigation tools: metrics for identifying which demographic groups experience allocation or quality-of-service harms, and algorithms for reducing unfairness across multiple fairness definitions. Implements group fairness constraints that enforce comparable model behavior across specified demographic groups, enabling data scientists to quantify fairness trade-offs against accuracy. Integrates with standard ML workflows through scikit-learn-compatible APIs and includes Jupyter notebooks demonstrating real-world applications in hiring, lending, and admissions scenarios.
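To make the group-fairness idea concrete, here is a minimal pure-Python sketch of what a metric like fairlearn's `demographic_parity_difference` computes: the gap between the highest and lowest selection rates across demographic groups. The helper names here are illustrative, not fairlearn's implementation; in practice you would call `fairlearn.metrics.demographic_parity_difference(y_true, y_pred, sensitive_features=...)`.

```python
# Sketch of the demographic parity difference: the spread between
# per-group selection rates (fraction of positive predictions).

def selection_rate(y_pred):
    """Fraction of positive predictions."""
    return sum(y_pred) / len(y_pred)

def demographic_parity_difference(y_pred, sensitive):
    """Max minus min selection rate across groups."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(selection_rate(preds))
    return max(rates) - min(rates)

# Toy predictions for two demographic groups "a" and "b":
# group "a" is selected at 3/4, group "b" at 1/4.
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

A value of 0 means all groups are selected at the same rate; fairlearn's mitigation algorithms (e.g. reductions) search for models that shrink this gap subject to an accuracy trade-off.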

About inFairness

IBM/inFairness

A PyTorch package to train and audit machine learning models for individual fairness.
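The individual-fairness criterion behind inFairness says that similar individuals should receive similar predictions. A common formalization is a Lipschitz-style condition on the model's outputs. The sketch below checks that condition in pure Python; the distance function, constant `L`, and function names are illustrative assumptions, not inFairness's API.

```python
# Sketch of an individual-fairness audit: flag pairs of individuals
# whose prediction gap exceeds L times the distance between their
# feature vectors (an approximate Lipschitz check).

def euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def lipschitz_violations(X, scores, L=1.0):
    """Count pairs where |score_i - score_j| > L * d(x_i, x_j)."""
    violations = 0
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if abs(scores[i] - scores[j]) > L * euclidean(X[i], X[j]):
                violations += 1
    return violations

# Two nearly identical applicants with very different scores violate
# the constraint; the distant third applicant does not.
X = [(0.0, 1.0), (0.05, 1.0), (3.0, 4.0)]
scores = [0.9, 0.1, 0.2]
print(lipschitz_violations(X, scores, L=1.0))  # 1
```

inFairness builds this idea into training (as a regularizer) and auditing for PyTorch models, with learned similarity metrics rather than a fixed Euclidean distance.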

Scores updated daily from GitHub, PyPI, and npm data.