interpret and imodels

These are complementary tools. imodels provides inherently interpretable model implementations meant to be used directly, while interpret offers post-hoc explanation techniques and a unified interface for explaining both interpretable and black-box models. Used together, they cover most of an interpretability workflow.
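
A minimal sketch of that combined workflow, assuming a recent interpret release (0.3+) whose blackbox explainers accept a fitted sklearn-style model directly (older releases took a predict function instead); RuleFitClassifier stands in here for any imodels estimator, and the dataset is purely illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from imodels import RuleFitClassifier              # interpretable model from imodels
from interpret.blackbox import PartialDependence   # post-hoc explainer from interpret
from interpret import show

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# imodels estimators follow the sklearn fit/predict contract.
model = RuleFitClassifier()
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# interpret treats the fitted model like any other predictor and renders
# the partial-dependence explanation in its dashboard / notebook widget.
pdp = PartialDependence(model, X_train)
show(pdp.explain_global())
```

Because the imodels model exposes the usual sklearn interface, other interpret blackbox explainers can be swapped in for PartialDependence in the same way; only the fitted model and the training data are needed.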

Metric           interpret           imodels
Score            72 (Verified)       75 (Verified)
Maintenance      25/25               10/25
Adoption         10/25               21/25
Maturity         16/25               25/25
Community        21/25               19/25
Stars            6,813               1,574
Forks            778                 136
Downloads                            44,576
Commits (30d)    74                  0
Language         C++                 Jupyter Notebook
License          MIT                 MIT

About interpret

interpretml/interpret

Fit interpretable models. Explain blackbox machine learning.

Combines glassbox models (EBMs, decision trees, linear models) with post-hoc explainers (SHAP, LIME, partial dependence) behind a unified API. Its Explainable Boosting Machines match state-of-the-art blackbox performance while remaining fully interpretable, with automatic interaction detection and differential privacy support. The package integrates with the scikit-learn ecosystem and provides Plotly/Dash-based dashboards for both global and local explanations across multiple models.
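
A minimal sketch of the glassbox side, following the project's documented quickstart pattern (the dataset and split are illustrative, not part of the library):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pairwise interactions are detected automatically; the `interactions`
# parameter controls how many such terms the EBM may add.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

show(ebm.explain_global())                        # per-feature shape functions and importances
show(ebm.explain_local(X_test[:5], y_test[:5]))   # explanations for individual rows
```

The same explain_global / explain_local calls and the show dashboard work across interpret's other glassbox models and its blackbox explainers, which is what the "unified API" above refers to.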

About imodels

csinva/imodels

Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
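
A minimal sketch of imodels' sklearn-compatible interface, using FIGSClassifier as one representative estimator; the max_rules value is an illustrative assumption rather than a recommended setting:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from imodels import FIGSClassifier   # one of the package's rule/tree-based estimators

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = FIGSClassifier(max_rules=10)  # cap total rules so the model stays readable
model.fit(X_train, y_train)

print(model)                                        # readable form of the fitted tree-sum model
print("test accuracy:", model.score(X_test, y_test))
```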

Scores updated daily from GitHub, PyPI, and npm data.