fairlearn and inFairness
These tools are complementary and address different fairness paradigms. fairlearn focuses on group fairness metrics such as demographic parity and equalized odds, while inFairness specializes in individual fairness: enforcing during training that similar individuals receive similar predictions. Depending on which fairness definition your use case requires, the two can be used together.
About fairlearn
fairlearn/fairlearn
A Python package to assess and improve fairness of machine learning models.
Provides dual assessment and mitigation tools: metrics for identifying which demographic groups experience allocation or quality-of-service harms, and algorithms for reducing unfairness across multiple fairness definitions. Implements group fairness constraints that enforce comparable model behavior across specified demographic groups, enabling data scientists to quantify fairness trade-offs against accuracy. Integrates with standard ML workflows through scikit-learn-compatible APIs and includes Jupyter notebooks demonstrating real-world applications in hiring, lending, and admissions scenarios.
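The group-fairness metrics described above reduce to comparing a statistic across demographic groups. As a minimal sketch (plain Python, so it runs without fairlearn installed), here is the idea behind the demographic parity difference: the largest gap in selection rate between any two groups. fairlearn exposes an equivalent metric through its scikit-learn-compatible API; this standalone version is for illustration only.

```python
def demographic_parity_difference(y_pred, sensitive):
    """Largest gap in selection rate (mean positive prediction)
    between any two demographic groups."""
    rates = []
    for group in set(sensitive):
        preds = [p for p, s in zip(y_pred, sensitive) if s == group]
        rates.append(sum(preds) / len(preds))
    return max(rates) - min(rates)

# Toy hiring example: group "a" is selected 75% of the time, group "b" 25%.
y_pred    = [1, 1, 1, 0, 1, 0, 0, 0]
sensitive = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, sensitive))  # 0.5
```

A value of 0 means all groups are selected at the same rate; fairlearn's mitigation algorithms search for models that shrink this gap while trading off as little accuracy as possible.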
About inFairness
IBM/inFairness
PyTorch package to train and audit ML models for Individual Fairness
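Individual fairness is commonly formalized as a Lipschitz condition: the distance between two individuals' predictions should be bounded by the distance between the individuals themselves under a task-appropriate similarity metric. The sketch below audits that condition in plain Python; it is a conceptual illustration, not inFairness's actual API, and the function and model names are hypothetical.

```python
import itertools
import math

def audit_individual_fairness(model, inputs, lipschitz_bound=1.0):
    """Check the Lipschitz condition behind individual fairness:
    for every pair of inputs, the output distance must not exceed
    lipschitz_bound times the input distance. Returns violating pairs."""
    violations = []
    for x1, x2 in itertools.combinations(inputs, 2):
        d_in = math.dist(x1, x2)            # similarity between individuals
        d_out = abs(model(x1) - model(x2))  # distance between predictions
        if d_in > 0 and d_out > lipschitz_bound * d_in:
            violations.append((x1, x2, d_out / d_in))
    return violations

# Hypothetical models: one ignores the second (protected) feature,
# the other weights it heavily and so treats similar people differently.
fair_model   = lambda x: 0.5 * x[0]
unfair_model = lambda x: 0.5 * x[0] + 5.0 * x[1]
people = [(1.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
print(len(audit_individual_fairness(fair_model, people)))    # 0
print(len(audit_individual_fairness(unfair_model, people)))  # 2
```

inFairness works in the same spirit but inside PyTorch training loops, penalizing models whose predictions diverge for individuals deemed similar under a learned or specified fair metric.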