Awesome-Transformer-Attention and awesome-visual-representation-learning-with-transformers

These two repositories overlap directly: both aim to provide comprehensive, curated lists of papers, code, and related websites covering Vision Transformers and attention mechanisms in computer vision.

                 Awesome-Transformer-Attention   awesome-visual-representation-learning-with-transformers
Maintenance      0/25                            0/25
Adoption         10/25                           10/25
Maturity         8/25                            16/25
Community        20/25                           18/25
Stars            5,022                           269
Forks            495                             37
Downloads        —                               —
Commits (30d)    0                               0
Language         —                               —
License          No license                      MIT
Status           Stale 6m, No Package,           Stale 6m, No Package,
                 No Dependents                   No Dependents

About Awesome-Transformer-Attention

cmhungsteve/Awesome-Transformer-Attention

An ultimately comprehensive paper list of Vision Transformer/Attention, including papers, codes, and related websites

Organizes transformer and attention papers across 15+ vision tasks—from classification and detection to video analysis, 3D point clouds, medical imaging, and low-level restoration—with separate curated lists for multi-modal and emerging applications. Papers are categorized by architectural variant (pure attention, conv-stem hybrids, efficient transformers, attention-free alternatives), and the list is updated with recent conference proceedings from CVPR, ICCV, NeurIPS, and ICML. It includes implementation links and tutorial resources alongside paper references to support practitioners working with PyTorch/TensorFlow transformer frameworks across diverse vision domains.

About awesome-visual-representation-learning-with-transformers

alohays/awesome-visual-representation-learning-with-transformers

Awesome Transformers (self-attention) in Computer Vision

Scores updated daily from GitHub, PyPI, and npm data.