Awesome-Video-Diffusion and awesome-diffusion-v2v
These are complementary resources: Awesome-Video-Diffusion provides a broad survey of video diffusion models across many tasks, while awesome-diffusion-v2v offers a deeper, specialized treatment of the video-to-video editing subset, including benchmark implementations.
About Awesome-Video-Diffusion
showlab/Awesome-Video-Diffusion
A curated list of recent diffusion models for video generation, editing, and various other applications.
Organized into 20+ specialized categories, the collection spans foundation models and inference frameworks (HunyuanVideo, LTX-Video, Cosmos) alongside task-specific work on controllable generation, motion customization, video enhancement, talking-head synthesis, and emerging areas such as 4D content and game generation. Entries cover diffusion architectures combined with complementary techniques, including flow matching, reinforcement-learning policies, and 3D/NeRF priors for physics-aware synthesis. Each entry links to its GitHub repository, arXiv paper, and project website, supporting reproducibility and comparison across the video diffusion ecosystem.
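For orientation, the sketch below shows how one of the listed foundation models can be tried end to end. It assumes the Hugging Face diffusers library and its LTXPipeline for LTX-Video; the model ID, resolution, and sampling settings mirror the public diffusers example and are illustrative, not prescribed by the list.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the LTX-Video foundation model (one of the inference
# frameworks catalogued in the list) in half precision.
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
).to("cuda")

# Sample a short clip from a text prompt.
video = pipe(
    prompt="A drone shot over a foggy mountain ridge at sunrise",
    width=704,
    height=480,
    num_frames=161,          # roughly 6.7 s at 24 fps
    num_inference_steps=50,
).frames[0]

export_to_video(video, "ltx_sample.mp4", fps=24)
```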
About awesome-diffusion-v2v
wenhao728/awesome-diffusion-v2v
Awesome diffusion Video-to-Video (V2V): a collection of papers on diffusion model-based video editing, also known as video-to-video (V2V) translation, together with benchmark code for evaluating video editing methods.
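Many of the V2V methods collected here build on the same SDEdit-style recipe: partially noise the source frames, then denoise toward an edit prompt. Below is a minimal sketch of that recipe using Hugging Face diffusers' VideoToVideoSDPipeline with a zeroscope checkpoint as a stand-in; the input path, checkpoint, and settings are assumptions, and the benchmark's own code and model choices will differ.

```python
import torch
from diffusers import VideoToVideoSDPipeline
from diffusers.utils import export_to_video, load_video

# Load source frames from an arbitrary clip (hypothetical path)
# and resize them to a resolution the checkpoint expects.
frames = [f.resize((1024, 576)) for f in load_video("input.mp4")]

pipe = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL", torch_dtype=torch.float16
).to("cuda")

# SDEdit-style editing: `strength` sets how much noise is added to
# the source frames before denoising toward the edit prompt, trading
# faithfulness to the input against adherence to the edit.
edited = pipe(
    prompt="the same scene, repainted in watercolor style",
    video=frames,
    strength=0.6,
).frames[0]

export_to_video(edited, "edited.mp4", fps=8)
```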