diffusers and stable-diffusion-videos

Diffusers is a foundational framework that provides the core diffusion model implementations and pipelines, while Stable Diffusion Videos builds on top of it to add specialized video generation capabilities through latent space interpolation between prompts.

                 diffusers         stable-diffusion-videos
Score            90 (Verified)     66 (Established)
Maintenance      25/25             6/25
Adoption         15/25             15/25
Maturity         25/25             25/25
Community        25/25             20/25
Stars            33,029            4,671
Forks            6,832             449
Downloads        —                 222
Commits (30d)    82                0
Language         Python            Python
License          Apache-2.0        Apache-2.0
Risk flags       None              None

About diffusers

huggingface/diffusers

🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch.

Provides modular, composable building blocks—including interchangeable noise schedulers, pretrained models, and end-to-end pipelines—enabling both quick inference and custom system design via the Hugging Face Model Hub. Emphasizes transparency and customizability over abstraction, allowing developers to inspect and modify individual diffusion components rather than treating them as black boxes.

About stable-diffusion-videos

nateraw/stable-diffusion-videos

Create 🔥 videos with Stable Diffusion by exploring the latent space and morphing between text prompts

Implements spherical linear interpolation (SLERP) in the latent space between seed vectors to generate smooth frame sequences, with optional audio-sync capabilities that tempo-match interpolation steps to music beats. Built as a Hugging Face Diffusers pipeline wrapper supporting float16 inference on CUDA/MPS, it includes a Gradio web interface for interactive video generation with configurable guidance scales and diffusion steps.
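The SLERP step at the heart of the frame interpolation can be sketched in plain Python. This is a generic spherical linear interpolation between two vectors, not the library's exact implementation (which operates on latent tensors and includes the same near-parallel fallback):

```python
import math

def slerp(t, v0, v1):
    """Spherical linear interpolation between two equal-length vectors.

    t is the interpolation fraction in [0, 1]. When the vectors are
    nearly parallel, sin(theta) approaches 0, so we fall back to
    ordinary linear interpolation to avoid dividing by ~0.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    norm0 = math.sqrt(sum(a * a for a in v0))
    norm1 = math.sqrt(sum(b * b for b in v1))
    cos_theta = dot / (norm0 * norm1)
    if abs(cos_theta) > 0.9995:
        # Nearly parallel: plain lerp is numerically safe here.
        return [a + t * (b - a) for a, b in zip(v0, v1)]
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Sampling `t` at evenly spaced steps between two seed latents yields the smooth frame sequence; the audio-sync option instead spaces the `t` values according to detected beats.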

Scores updated daily from GitHub, PyPI, and npm data.