Awesome-Video-Diffusion-Models and awesome-diffusion-v2v
These are ecosystem siblings — one is a broad survey aggregating video diffusion model research across multiple applications, while the other is a specialized collection focused specifically on the video-to-video translation subset of that broader landscape.
About Awesome-Video-Diffusion-Models
ChenHsing/Awesome-Video-Diffusion-Models
[CSUR] A Survey on Video Diffusion Models
Comprehensive curated resource covering diffusion-based approaches to video generation and editing, with categorized sections spanning text-to-video synthesis, pose-, instruction-, and sound-guided generation, video completion, and editing methods. Organizes research across training-based and training-free paradigms, alongside foundational models and toolboxes such as Stable Video Diffusion, AnimateDiff, and Open-Sora. Serves as a taxonomy of diffusion architectures, from U-Net and Transformer variants to latent diffusion approaches, enabling researchers to identify methodological patterns across video generation and manipulation tasks (see the sketch below for a hands-on entry point).
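As a concrete entry point to one of the toolboxes the list catalogs, here is a minimal image-to-video sketch using Stable Video Diffusion through Hugging Face `diffusers`. The checkpoint ID and pipeline calls follow the public `diffusers` documentation, but the input file name and the specific parameter values (`decode_chunk_size`, `fps`) are illustrative assumptions, not settings prescribed by the survey.

```python
# Minimal image-to-video sketch with Stable Video Diffusion via diffusers.
# Assumes: diffusers >= 0.24, torch with a CUDA GPU, and Hub access.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",  # public SVD checkpoint
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.to("cuda")

# Condition on a single still image; SVD animates it into a short clip.
# "conditioning_frame.png" is a placeholder path for illustration.
image = load_image("conditioning_frame.png").resize((1024, 576))

frames = pipe(image, decode_chunk_size=8).frames[0]  # list of PIL frames
export_to_video(frames, "generated.mp4", fps=7)
```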
About awesome-diffusion-v2v
wenhao728/awesome-diffusion-v2v
Awesome diffusion Video-to-Video (V2V). A collection of papers on diffusion model-based video editing, a.k.a. video-to-video (V2V) translation, along with video editing benchmark code.
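To make the V2V setting concrete, the sketch below runs diffusion-based video-to-video translation with the `diffusers` VideoToVideoSDPipeline: a source clip is re-rendered to match a text prompt while `strength` controls how far the edit departs from the input. The zeroscope checkpoint, file names, prompt, and `strength` value are illustrative assumptions; this is not the benchmark code shipped with the list.

```python
# Minimal video-to-video (V2V) translation sketch using diffusers.
# Assumes: diffusers with VideoToVideoSDPipeline, imageio, and a CUDA GPU.
import torch
import imageio
from PIL import Image
from diffusers import VideoToVideoSDPipeline
from diffusers.utils import export_to_video

pipe = VideoToVideoSDPipeline.from_pretrained(
    "cerspense/zeroscope_v2_XL",  # illustrative text-to-video checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Load the source clip and resize frames to the model's expected resolution.
# "input.mp4" is a placeholder path for illustration.
reader = imageio.get_reader("input.mp4")
video = [Image.fromarray(frame).resize((1024, 576)) for frame in reader]

# `strength` trades fidelity to the source video against the edit prompt:
# lower values stay closer to the input, higher values follow the prompt more.
edited = pipe(
    prompt="the same scene, rendered as a watercolor painting",
    video=video,
    strength=0.6,
).frames[0]
export_to_video(edited, "edited.mp4", fps=8)
```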