Awesome-Video-Diffusion and Awesome-Video-Diffusion-Models

These are near-duplicate curated lists with overlapping scope: one is a community-maintained collection of video diffusion resources, while the other is the companion repository of a survey paper. This makes them competitive alternatives rather than complementary tools.

Awesome-Video-Diffusion
Maintenance 20/25
Adoption 10/25
Maturity 8/25
Community 18/25
Stars: 5,531
Forks: 345
Commits (30d): 20
No License, No Package, No Dependents

Awesome-Video-Diffusion-Models
Maintenance 16/25
Adoption 10/25
Maturity 8/25
Community 16/25
Stars: 2,282
Forks: 112
Commits (30d): 5
No License, No Package, No Dependents

About Awesome-Video-Diffusion

showlab/Awesome-Video-Diffusion

A curated list of recent diffusion models for video generation, editing, and various other applications.

Organized into 20+ specialized categories, the collection spans foundation models and inference frameworks (HunyuanVideo, LTX-Video, Cosmos) alongside task-specific implementations for controllable generation, motion customization, video enhancement, talking-head synthesis, and emerging domains such as 4D content and game generation. The curated entries link to implementations built on diffusion architectures, together with complementary techniques including flow matching, reinforcement-learning policies, and 3D/NeRF priors for physics-aware synthesis. Each entry points to its GitHub repository, arXiv paper, and project website, supporting reproducibility and comparative benchmarking across the video diffusion ecosystem.

About Awesome-Video-Diffusion-Models

ChenHsing/Awesome-Video-Diffusion-Models

[CSUR] A Survey on Video Diffusion Models

A comprehensive curated resource covering diffusion-based approaches to video generation and editing, with categorized sections spanning text-to-video synthesis, pose-, instruction-, and sound-guided generation, video completion, and editing methods. It organizes research across training-based and training-free paradigms, alongside foundational models and toolboxes such as Stable Video Diffusion, AnimateDiff, and Open-Sora. The list serves as a taxonomy of diffusion architectures, from U-Net and Transformer variants to latent diffusion approaches, helping researchers identify methodological patterns across video generation and manipulation applications.

Scores updated daily from GitHub, PyPI, and npm data.