Awesome-Transformer-Attention and awesome-visual-representation-learning-with-transformers
These two repositories serve the same purpose: each curates a comprehensive list of papers, code, and related websites on Vision Transformers and attention mechanisms in computer vision.
About Awesome-Transformer-Attention
cmhungsteve/Awesome-Transformer-Attention
An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites
Organizes transformer and attention papers across 15+ vision tasks, from classification and detection to video analysis, 3D point clouds, medical imaging, and low-level restoration, with separate curated lists for multi-modal and emerging applications. Papers are categorized by architectural variant (pure attention, conv-stem hybrids, efficient transformers, attention-free alternatives) and continuously updated with recent conference proceedings from CVPR, ICCV, NeurIPS, and ICML. Includes implementation links and tutorial resources alongside paper references to support practitioners working with PyTorch/TensorFlow transformer frameworks across diverse vision domains.
About awesome-visual-representation-learning-with-transformers
alohays/awesome-visual-representation-learning-with-transformers
Awesome Transformers (self-attention) in Computer Vision