huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
Provides modular, composable building blocks—including interchangeable noise schedulers, pretrained models, and end-to-end pipelines—enabling both quick inference and custom system design via the Hugging Face Model Hub. Emphasizes transparency and customizability over abstraction, allowing developers to inspect and modify individual diffusion components rather than treating them as black boxes.
33,029 stars. Used by 53 other packages. Actively maintained with 82 commits in the last 30 days. Available on PyPI.
Stars: 33,029
Forks: 6,832
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 82
Dependencies: 9
Reverse dependents: 53
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/huggingface/diffusers"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
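The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch using only the standard library; the `X-API-Key` header name is an assumption (the page only says a free key raises the limit from 100 to 1,000 requests/day), so check the API's own documentation for the real authentication scheme:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_url(category, owner, repo):
    """Build the quality-data URL for a repository, e.g. diffusion/huggingface/diffusers."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo, api_key=None):
    """Fetch quality data as a dict. Anonymous access is rate-limited
    to 100 requests/day; a free key raises that to 1,000/day."""
    req = urllib.request.Request(build_url(category, owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # assumed header name, not confirmed
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

url = build_url("diffusion", "huggingface", "diffusers")
```

`fetch_quality` performs a live network request, so wrap it in error handling (`urllib.error.HTTPError`) in real use, particularly once the daily quota is exhausted.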
Related projects
bghira/SimpleTuner
A general fine-tuning kit geared toward image/video/audio diffusion models.
mcmonkeyprojects/SwarmUI
SwarmUI (formerly StableSwarmUI), A Modular Stable Diffusion Web-User-Interface, with an...
nateraw/stable-diffusion-videos
Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts
AUTOMATIC1111/stable-diffusion-webui
Stable Diffusion web UI
probabilists/azula
Diffusion models in PyTorch