RyanHUNGry/Interpreting-Graph-Transformers-for-Long-Range-Interactions
Interpreting Graph Transformers for Long-Range Interactions proposes two explainability algorithms based on learned attention matrices and integrated gradients. Both methods are designed specifically for the graph transformer architecture.
No commits in the last 6 months.
Stars: 2
Forks: —
Language: Jupyter Notebook
License: —
Last pushed: Mar 10, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/RyanHUNGry/Interpreting-Graph-Transformers-for-Long-Range-Interactions"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
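The curl endpoint above follows a predictable `/{owner}/{repo}` pattern, so a client can construct the URL programmatically. A minimal Python sketch; the `quality_url` helper is hypothetical, and only the base endpoint path comes from this page (the response schema is not documented here):

```python
# Base path taken verbatim from the curl example on this page.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"

# Reproduces the URL shown in the curl example above.
print(quality_url(
    "RyanHUNGry",
    "Interpreting-Graph-Transformers-for-Long-Range-Interactions",
))
```

To actually fetch the data, pass the resulting URL to any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`); unauthenticated access is rate-limited to 100 requests/day per the note above.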
Higher-rated alternatives
jessevig/bertviz
BertViz: Visualize Attention in Transformer Models
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
EleutherAI/knowledge-neurons
A library for finding knowledge neurons in pretrained transformer models.
hila-chefer/Transformer-MM-Explainability
[ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for...
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model...