DirtyHarryLYL/Transformer-in-Vision
Recent Transformer-based computer vision (CV) and related works.
A curated collection of transformer architecture papers, implementations, and research spanning vision-language pretraining, multimodal learning, generative models (diffusion and GANs), and specialized domains like medical imaging, 3D vision, and autonomous driving. Integrates resources from Hugging Face, JAX (SCENIC), and PyTorch ecosystems, covering foundational works (ViT, CLIP, DALL-E) through recent advances in knowledge distillation and efficient training strategies.
1,339 stars. No commits in the last 6 months.
Stars: 1,339
Forks: 143
Language: —
License: —
Category: —
Last pushed: Aug 22, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DirtyHarryLYL/Transformer-in-Vision"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
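For scripted access, the same endpoint can be called from Python. The sketch below is a minimal example built only from the curl command above; the JSON response shape and the X-API-Key header used for the optional key are assumptions, not documented on this page.

import requests

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch the quality record for one repository.

    Anonymous use is limited to 100 requests/day; passing a free key
    raises the limit to 1,000/day (the header name below is assumed).
    """
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(f"{BASE_URL}/{category}/{owner}/{repo}",
                        headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx errors (e.g. rate limiting)
    return resp.json()

if __name__ == "__main__":
    data = fetch_quality("transformers", "DirtyHarryLYL", "Transformer-in-Vision")
    print(data)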
Higher-rated alternatives
pairlab/SlotFormer
Code release for ICLR 2023 paper: SlotFormer on object-centric dynamics models
ChristophReich1996/Swin-Transformer-V2
PyTorch reimplementation of the paper "Swin Transformer V2: Scaling Up Capacity and Resolution"...
prismformore/Multi-Task-Transformer
Code of ICLR2023 paper "TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene...
kyegomez/MegaVIT
The open source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"
uakarsh/latr
Implementation of LaTr: Layout-aware transformer for scene-text VQA, a novel multimodal...