NVlabs/Sana
SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer
Implements linear-complexity diffusion transformers with block causal attention to enable efficient training and inference across image and video generation tasks, supporting variants like SANA-Sprint (one-step) and SANA-Video (temporal synthesis). Integrates with major frameworks including HuggingFace Diffusers, ComfyUI, SGLang serving, and Cosmos-RL for reinforcement learning post-training. Provides complete training pipelines with multi-scale WebDataset support, 4-bit quantization, and ControlNet compatibility for production deployments.
5,000 stars. Actively maintained with 4 commits in the last 30 days.
Stars: 5,000
Forks: 333
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 10, 2026
Commits (30d): 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/NVlabs/Sana"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
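If you consume this endpoint from Python rather than curl, a minimal sketch looks like the following. The JSON field names (`repo`, `stars`, `forks`, `commits_30d`, and so on) are assumptions based on the stats shown on this page; check a live response for the actual schema.

```python
import json
import urllib.request

# Hypothetical response shape mirroring the fields on this page;
# the real API may use different names or nesting.
SAMPLE = json.dumps({
    "repo": "NVlabs/Sana",
    "stars": 5000,
    "forks": 333,
    "language": "Python",
    "license": "Apache-2.0",
    "commits_30d": 4,
})

def parse_quality(payload: str) -> dict:
    """Pull out the headline metrics from an API response body."""
    data = json.loads(payload)
    return {
        "repo": data.get("repo"),
        "stars": data.get("stars", 0),
        "forks": data.get("forks", 0),
        "commits_30d": data.get("commits_30d", 0),
    }

def fetch_quality(repo: str) -> dict:
    """Fetch live metrics; URL pattern follows the curl example above."""
    url = f"https://pt-edge.onrender.com/api/v1/quality/diffusion/{repo}"
    with urllib.request.urlopen(url) as resp:  # network call; rate-limited
        return parse_quality(resp.read().decode())

metrics = parse_quality(SAMPLE)
print(metrics["stars"], metrics["commits_30d"])
```

Keeping parsing separate from fetching makes the reader easy to unit-test offline and to adapt once the real schema is known.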
Related models
FoundationVision/VAR
[NeurIPS 2024 Best Paper Award][GPT beats diffusion🔥] [scaling laws in visual generation📈]...
nerdyrodent/VQGAN-CLIP
Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.
huggingface/finetrainers
Scalable and memory-optimized training of diffusion models
eps696/aphantasia
CLIP + FFT/DWT/RGB = text to image/video
AssemblyAI-Community/MinImagen
MinImagen: A minimal implementation of the Imagen text-to-image model