interpret and imodels
These are complementary tools: imodels provides implementations of inherently interpretable models designed for direct use, while interpret offers post-hoc explanation techniques plus a unified interface for explaining both interpretable and black-box models. Used together, they cover most steps of an interpretability workflow.
About interpret
interpretml/interpret
Fit interpretable models. Explain blackbox machine learning.
Combines glassbox models (EBM, decision trees, linear models) with post-hoc explainers (SHAP, LIME, partial dependence) in a unified API. Its Explainable Boosting Machines (EBMs) match state-of-the-art black-box performance while remaining fully interpretable, with automatic interaction detection and differential-privacy support. Integrates with the scikit-learn ecosystem and provides Plotly/Dash-based dashboards for both global and local explanations across multiple models.
About imodels
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).