davor10105/relative-absolute-magnitude-propagation
Explain the outputs of your Vision Transformers, Residual Networks, and classic CNNs with absLRP, and evaluate the explanations across multiple criteria using Global Attribution Evaluation.
No commits in the last 6 months.
Stars: 4
Forks: 1
Language: Python
License: —
Last pushed: Dec 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/davor10105/relative-absolute-magnitude-propagation"
Open to everyone: 100 requests/day with no API key needed. Get a free key for 1,000 requests/day.
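The same endpoint can be called from Python. A minimal sketch using only the standard library, assuming the URL pattern shown in the curl command above (`/api/v1/quality/transformers/<owner>/<repo>`); the response schema is not documented on this page, so the JSON is returned as-is:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def build_quality_url(owner: str, repo: str) -> str:
    """Build the quality-data endpoint URL for a GitHub owner/repo slug."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data for a repo. The response fields are not
    documented on this page, so the raw parsed JSON is returned."""
    with urllib.request.urlopen(build_quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the endpoint for this repo; call fetch_quality(...) to
    # retrieve the data (counts against the 100 requests/day limit).
    print(build_quality_url("davor10105", "relative-absolute-magnitude-propagation"))
```

Unauthenticated calls are limited to 100/day, so caching responses locally is advisable when polling several repositories.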
Higher-rated alternatives
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...