SHAP and SAGE
SHAP is a mature, general-purpose library for computing Shapley-based explanations, offering multiple estimation algorithms (exact tree-based, kernel, deep-learning, and permutation explainers) plus rich visualization tooling, while SAGE is a specialized research tool focused on estimating global feature importance with Shapley values. This makes them **complements**: SHAP for local, per-prediction explanations and broad explainability workflows, SAGE for dataset-level importance metrics.
About shap
shap/shap
A game theoretic approach to explain the output of any machine learning model.
Based on the README, here's a technical summary: Implements fast exact algorithms for tree ensemble models (XGBoost, LightGBM, CatBoost, scikit-learn, PySpark) via optimized C++ backends, alongside approximation methods for deep learning (DeepExplainer, which builds on DeepLIFT) and for NLP transformers using coalitional game rules. Provides multiple visualizations (waterfall plots, force plots, dependence scatter plots, and beeswarm distributions) to show feature contributions at both the instance and global level. Integrates directly with popular ML frameworks and Hugging Face transformers, supporting both tabular and text-based model explanations.
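To make the "game theoretic approach" concrete, here is a minimal pure-Python sketch of the exact Shapley computation that underlies all of SHAP's explainers. This is not shap's API; it enumerates coalitions directly (feasible only for a handful of features, which is why shap provides the fast tree and approximation algorithms above). The toy model and baseline are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: value_fn maps a frozenset of feature
    indices (a coalition) to the model's payoff for that coalition."""
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                w = (factorial(len(S)) * factorial(n_features - len(S) - 1)
                     / factorial(n_features))
                # Weighted marginal contribution of feature i to coalition S
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

# Hypothetical toy model f(x) = x0 + 2*x1 + x0*x1, explained at x = (1, 1),
# with absent features set to a baseline of 0.
x = (1.0, 1.0)

def value_fn(S):
    x0 = x[0] if 0 in S else 0.0
    x1 = x[1] if 1 in S else 0.0
    return x0 + 2 * x1 + x0 * x1

phi = shapley_values(value_fn, 2)
# Efficiency property: contributions sum to f(x) - f(baseline) = 4 - 0
print(phi)  # [1.5, 2.5]
```

The interaction term `x0*x1` is split evenly between the two features, illustrating how Shapley values fairly allocate credit that no single feature earns alone.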
About sage
iancovert/sage
For calculating global feature importance using Shapley values.
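Where SHAP attributes a single prediction, SAGE scores each feature by how much it reduces the model's expected loss over a whole dataset, again splitting credit with Shapley values. Below is a pure-Python sketch of that idea, not SAGE's API: the coalition value is the negative mean squared error when features outside the coalition are mean-imputed. The dataset and model are hypothetical.

```python
from itertools import combinations
from math import factorial

# Hypothetical data: y depends strongly on feature 0, weakly on feature 1.
X = [(1.0, 0.0), (2.0, 1.0), (3.0, 0.0), (4.0, 1.0)]
y = [2.0, 4.5, 6.0, 8.5]                 # generated by y = 2*x0 + 0.5*x1
model = lambda x0, x1: 2 * x0 + 0.5 * x1

means = [sum(col) / len(X) for col in zip(*X)]

def coalition_value(S):
    """Negative mean squared error when features outside S are
    replaced by their dataset means (a crude marginalization)."""
    total = 0.0
    for (x0, x1), target in zip(X, y):
        x0_ = x0 if 0 in S else means[0]
        x1_ = x1 if 1 in S else means[1]
        total += (model(x0_, x1_) - target) ** 2
    return -total / len(X)

def shapley_values(value_fn, n):
    """Exact Shapley values over n players (features)."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                w = (factorial(len(S)) * factorial(n - len(S) - 1)
                     / factorial(n))
                phi[i] += w * (value_fn(S | {i}) - value_fn(S))
    return phi

sage_vals = shapley_values(coalition_value, 2)
print(sage_vals)  # feature 0's score is much larger than feature 1's
```

The scores sum to the total loss explained (full-information loss minus baseline loss), so they form an additive decomposition of the model's predictive performance; SAGE's contribution is estimating these values efficiently via sampling rather than full enumeration.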
Scores updated daily from GitHub, PyPI, and npm data.