laura-rieger/deep-explanation-penalization
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" (https://arxiv.org/abs/1909.13584).
128 stars. No commits in the last 6 months.
Stars: 128
Forks: 14
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 22, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/laura-rieger/deep-explanation-penalization"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
obss/sahi
Framework agnostic sliced/tiled inference + interactive ui + error analysis plots
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent...
understandable-machine-intelligence-lab/Quantus
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling...