diffusers and Awesome-Diffusion-Models
One project is a PyTorch library for implementing diffusion models, while the other is a curated list of resources and papers about diffusion models, making them complementary for researchers and developers in the field.
About diffusers
huggingface/diffusers
🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.
Provides modular, composable building blocks, including interchangeable noise schedulers, pretrained models, and end-to-end pipelines, with pretrained weights loaded from the Hugging Face Model Hub; this supports both quick inference and custom system design. Emphasizes transparency and customizability over abstraction, allowing developers to inspect and modify individual diffusion components rather than treating them as black boxes.
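To make the "interchangeable noise schedulers" idea concrete, here is a minimal, self-contained sketch (plain Python, not diffusers code) of the forward-diffusion logic that a DDPM-style scheduler encapsulates: a linear beta schedule, its cumulative signal-retention product, and the closed-form noising step q(x_t | x_0). The function names and the scalar "pixel" simplification are illustrative assumptions; in diffusers, scheduler classes wrap this kind of math behind a shared interface so they can be swapped within a pipeline.

```python
import math

# Illustrative sketch of DDPM-style forward diffusion:
#   x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
# with a linear beta schedule. Names and defaults here are assumptions
# for illustration, not the diffusers API.

def linear_beta_schedule(num_steps: int,
                         beta_start: float = 1e-4,
                         beta_end: float = 0.02) -> list[float]:
    """Linearly spaced per-step noise variances beta_1..beta_T."""
    step = (beta_end - beta_start) / (num_steps - 1)
    return [beta_start + i * step for i in range(num_steps)]

def alpha_bar(betas: list[float], t: int) -> float:
    """Cumulative product of (1 - beta) up to step t (inclusive)."""
    prod = 1.0
    for beta in betas[: t + 1]:
        prod *= 1.0 - beta
    return prod

def add_noise(x0: float, t: int, betas: list[float], noise: float) -> float:
    """Sample x_t from q(x_t | x_0) for a scalar 'pixel' x0."""
    ab = alpha_bar(betas, t)
    return math.sqrt(ab) * x0 + math.sqrt(1.0 - ab) * noise

betas = linear_beta_schedule(1000)
print(alpha_bar(betas, 0))    # close to 1.0: early steps barely perturb the signal
print(alpha_bar(betas, 999))  # close to 0.0: the final step is almost pure noise
```

Swapping schedules (e.g. cosine instead of linear) only changes how `betas` is produced, which is exactly why diffusers can expose schedulers as interchangeable components of a pipeline.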
About Awesome-Diffusion-Models
diff-usion/Awesome-Diffusion-Models
A collection of resources and papers on Diffusion Models
Organizes peer-reviewed papers and tutorials across diverse diffusion model applications—vision (generation, segmentation, medical imaging), audio (synthesis, enhancement, TTS), NLP, time-series forecasting, molecular generation, and reinforcement learning. Curates foundational resources including mathematical explanations, video lectures, and runnable Jupyter notebooks that bridge theory to implementation. Structured taxonomy enables researchers to locate domain-specific papers and learning materials across the rapidly expanding diffusion literature.