ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models
A comprehensive curated resource covering diffusion-based approaches to video generation and editing, with categorized sections spanning text-to-video synthesis; pose-, instruction-, and sound-guided generation; video completion; and editing methods. It organizes research across training-based and training-free paradigms, and also lists foundational models and toolboxes such as Stable Video Diffusion, AnimateDiff, and Open-Sora. The list serves as a taxonomy of diffusion architectures (U-Net and Transformer variants, latent diffusion approaches), helping researchers identify methodological patterns across video generation and manipulation applications.
2,282 stars. Actively maintained with 5 commits in the last 30 days.
Stars: 2,282
Forks: 112
Language: —
License: —
Category: —
Last pushed: Mar 14, 2026
Commits (30d): 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/ChenHsing/Awesome-Video-Diffusion-Models"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
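For programmatic access, the curl example above can be wrapped in a small Python helper. This is a minimal sketch using only the standard library; the endpoint path comes from the curl example, but the JSON response fields are not documented here, so the result is returned as an untyped dict.

```python
# Minimal sketch of calling the quality API shown above.
# The endpoint shape is taken from the curl example; response field
# names are NOT documented on this page, so no schema is assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch quality data for a repository.

    No API key is needed for up to 100 requests/day; a free key
    raises the limit to 1,000/day.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

# Example (performs a live network request):
# data = fetch_quality("ChenHsing", "Awesome-Video-Diffusion-Models")
# print(data)
```

The URL builder is separated from the fetch so the endpoint can be reused with other HTTP clients or an API-key header once one is issued.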
Related models
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, and various other applications.
lixinustc/Awesome-diffusion-model-for-image-processing
A summary of diffusion-based image processing methods, including restoration, enhancement, coding,...
xlite-dev/Awesome-DiT-Inference
📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization,...
TUM-AVS/FM-AD-Survey
This repository collects research papers on large foundation models for scenario generation and...
wangkai930418/awesome-diffusion-categorized
A collection of diffusion model papers categorized by subarea.