interpret and awesome-machine-learning-interpretability
About interpret
interpretml/interpret
Fit interpretable models. Explain blackbox machine learning.
Combines glassbox models (EBM, decision trees, linear models) with post-hoc explainers (SHAP, LIME, partial dependence) in a unified API. Its Explainable Boosting Machines match state-of-the-art blackbox performance while remaining fully interpretable, with automatic interaction detection and differential-privacy support. Integrates with the scikit-learn ecosystem and provides Plotly/Dash-based dashboards for both global and local explanations across multiple models.
About awesome-machine-learning-interpretability
jphall663/awesome-machine-learning-interpretability
A curated list of awesome responsible machine learning resources.
Organizes resources across interpretability, fairness, governance, and safety—spanning technical tools (Python/R/JavaScript packages, benchmarks, datasets), policy frameworks, incident documentation, and critical perspectives on AI's environmental and social costs. The collection bridges academic research, regulatory guidance, and real-world AI failure case studies, enabling practitioners to implement responsible ML across the full development lifecycle. Actively maintained with structured categories for education, auditing checklists, red-teaming resources, and emerging topics like generative AI explainability and agentic AI governance.