Awesome-Video-Diffusion and awesome-diffusion-v2v

These are complementary resources: Awesome-Video-Diffusion provides a broad survey of video diffusion models across many tasks, while awesome-diffusion-v2v offers a deeper, specialized focus on the video-to-video editing subset, along with benchmark implementations.

Awesome-Video-Diffusion — score 56 (Established)
Maintenance 20/25 · Adoption 10/25 · Maturity 8/25 · Community 18/25
Stars: 5,531 · Forks: 345 · Commits (30d): 20
No license · No package · No dependents

awesome-diffusion-v2v — score 41 (Emerging)
Maintenance 6/25 · Adoption 10/25 · Maturity 16/25 · Community 9/25
Stars: 280 · Forks: 9 · Commits (30d): 0 · Language: Python · License: MIT
No package · No dependents

About Awesome-Video-Diffusion

showlab/Awesome-Video-Diffusion

A curated list of recent diffusion models for video generation, editing, and various other applications.

Organized into 20+ specialized categories, the collection spans foundation models and inference frameworks (HunyuanVideo, LTX-Video, Cosmos) alongside task-specific implementations for controllable generation, motion customization, video enhancement, talking head synthesis, and emerging domains like 4D content and game generation. The curated entries link to implementations built on diffusion architectures with complementary techniques including flow matching, reinforcement learning policies, and 3D/NeRF priors for physics-aware synthesis. Each resource includes direct GitHub repositories, arXiv papers, and project websites for reproducibility and comparative benchmarking across the video diffusion ecosystem.

About awesome-diffusion-v2v

wenhao728/awesome-diffusion-v2v

Awesome diffusion Video-to-Video (V2V): a collection of papers on diffusion model-based video editing, a.k.a. video-to-video (V2V) translation, together with video editing benchmark code.

Scores updated daily from GitHub, PyPI, and npm data.