sayakpaul/probing-vits
Probing the representations of Vision Transformers.
Provides TensorFlow implementations of ViT, DeiT, and DINO with multiple probing techniques including attention rollout, mean attention distance, positional embedding visualization, and per-head attention extraction. Pre-trained weights from official codebases are loaded and validated against ImageNet-1k benchmarks. Interactive Hugging Face Spaces demos enable real-time attention visualization on custom images, complemented by Jupyter notebooks demonstrating video-to-attention heatmap generation and representation analysis.
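Of the probing techniques listed, attention rollout is the most commonly reimplemented. As a rough illustration (not the repo's own code), here is a minimal NumPy sketch of attention rollout in the style of Abnar & Zuidema, assuming you have already extracted per-layer, per-head attention matrices; the 0.5/0.5 residual mixing is one common convention.

```python
import numpy as np

def attention_rollout(attentions):
    """Trace information flow through a ViT by composing attention maps.

    attentions: list of (num_heads, seq_len, seq_len) arrays, one per layer,
                each row a softmax distribution over keys.
    Returns a (seq_len, seq_len) rollout map whose rows sum to 1.
    """
    seq_len = attentions[0].shape[-1]
    rollout = np.eye(seq_len)
    for attn in attentions:
        # Average over heads, mix in the residual (identity) connection,
        # and re-normalize so each row stays a probability distribution.
        a = attn.mean(axis=0)
        a = 0.5 * a + 0.5 * np.eye(seq_len)
        a = a / a.sum(axis=-1, keepdims=True)
        # Compose this layer's map with everything below it.
        rollout = a @ rollout
    return rollout
```

The first row of the result (the CLS token's rollout) is what is typically reshaped into a patch-grid heatmap for visualization.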
340 stars. No commits in the last 6 months.
Stars: 340
Forks: 22
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Oct 05, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/sayakpaul/probing-vits"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
UdbhavPrasad072300/Transformer-Implementations
Library - Vanilla, ViT, DeiT, BERT, GPT
jaehyunnn/ViTPose_pytorch
An unofficial implementation of ViTPose [Y. Xu et al., 2022]
tintn/vision-transformer-from-scratch
A Simplified PyTorch Implementation of Vision Transformer (ViT)
icon-lab/ResViT
Official Implementation of ResViT: Residual Vision Transformers for Multi-modal Medical Image Synthesis
gupta-abhay/pytorch-vit
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale