interpret and awesome-machine-learning-interpretability

Scores            interpret   awesome-machine-learning-interpretability
Maintenance       25/25       20/25
Adoption          10/25       10/25
Maturity          16/25       16/25
Community         21/25       23/25

Stats             interpret   awesome-machine-learning-interpretability
Stars             6,813       3,996
Forks             778         623
Downloads
Commits (30d)     74          10
Language          C++
License           MIT         CC0-1.0
Package           No Package  No Package
Dependents        No Dependents  No Dependents

About interpret

interpretml/interpret

Fit interpretable models. Explain blackbox machine learning.

Combines glassbox models (EBMs, decision trees, linear models) with post-hoc explainers (SHAP, LIME, partial dependence) behind a unified API. Its flagship Explainable Boosting Machine matches state-of-the-art blackbox performance while remaining fully interpretable, with automatic interaction detection and differential-privacy support. The package integrates with the scikit-learn ecosystem and provides Plotly/Dash-based dashboards for both global and local explanations across multiple models.

About awesome-machine-learning-interpretability

jphall663/awesome-machine-learning-interpretability

A curated list of awesome responsible machine learning resources.

Organizes resources across interpretability, fairness, governance, and safety, spanning technical tools (Python/R/JavaScript packages, benchmarks, datasets), policy frameworks, incident documentation, and critical perspectives on AI's environmental and social costs. The collection bridges academic research, regulatory guidance, and real-world AI failure case studies, helping practitioners implement responsible ML across the full development lifecycle. It is actively maintained, with structured categories for education, auditing checklists, red-teaming resources, and emerging topics such as generative AI explainability and agentic AI governance.

Scores updated daily from GitHub, PyPI, and npm data.