yuanchenyang/smalldiffusion
Simple and readable code for training and sampling from diffusion models
Implements core diffusion training and sampling in under 100 lines of PyTorch, supporting multiple architectures (MLPs, U-Nets, Diffusion Transformers) and noise schedules (LogLinear, DDPM, LDM). Includes built-in support for conditional generation via classifier-free guidance and integrates seamlessly with Hugging Face Diffusers for sampling from pretrained models like Stable Diffusion. Designed for rapid experimentation from toy 2D datasets to large-scale image generation on multi-GPU setups via Accelerate.
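The classifier-free guidance mentioned above works by running the model twice per step (with and without the condition) and extrapolating from the unconditional noise prediction toward the conditional one. A minimal sketch of that combination rule, independent of smalldiffusion's actual API (the function name and list-based "predictions" are illustrative only):

```python
def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one. guidance_scale=1 recovers
    the plain conditional prediction; larger values strengthen the
    effect of the conditioning signal."""
    return [u + guidance_scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]

# Toy example with per-dimension scalar "predictions":
# where the two predictions agree, guidance changes nothing;
# where they differ, the difference is amplified.
print(cfg_combine([0.0, 1.0], [1.0, 1.0], 2.0))
```

In real samplers the same rule is applied to tensors of noise predictions at every denoising step.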
715 stars and 1,227 monthly downloads. No commits in the last 6 months. Available on PyPI.
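Of the noise schedules listed in the description, LogLinear is the simplest: noise levels are spaced evenly in log space, i.e. geometrically between a maximum and minimum sigma. A hedged sketch of such a schedule (the endpoint values and function name are arbitrary choices for illustration, not taken from the library):

```python
def loglinear_sigmas(sigma_min=0.01, sigma_max=10.0, N=5):
    # Geometric (log-linear) spacing from sigma_max down to sigma_min:
    # consecutive sigmas differ by a constant multiplicative ratio.
    ratio = sigma_min / sigma_max
    return [sigma_max * ratio ** (i / (N - 1)) for i in range(N)]

print(loglinear_sigmas())  # 5 sigmas from 10.0 down to 0.01
```

A sampler would then denoise from the largest sigma down to the smallest.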
Stars
715
Forks
55
Language
Python
License
MIT
Category
diffusion
Last pushed
Jun 14, 2025
Monthly downloads
1,227
Commits (30d)
0
Dependencies
6
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/yuanchenyang/smalldiffusion"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related projects
quantgirluk/aleatory
📦 Python library for Stochastic Processes Simulation and Visualisation
TuftsBCB/RegDiffusion
Diffusion model for gene regulatory network inference.
blei-lab/treeffuser
Treeffuser is an easy-to-use package for probabilistic prediction and probabilistic regression...
NVlabs/FastGen
NVIDIA FastGen: Fast Generation from Diffusion Models
lvyufeng/denoising-diffusion-mindspore
Implementation of Denoising Diffusion Probabilistic Model in MindSpore