Awesome-Video-Diffusion and awesome-video-generation

These are complementary curated resources that serve the same audience—one focuses specifically on diffusion-based approaches while the other covers the broader video generation landscape—making them useful to reference together rather than as alternatives.

                  Awesome-Video-Diffusion   awesome-video-generation
Maintenance       20/25                     6/25
Adoption          10/25                     10/25
Maturity          8/25                      16/25
Community         18/25                     14/25
Stars             5,531                     753
Forks             345                       38
Downloads         —                         —
Commits (30d)     20                        0
Language          —                         TeX
License           None                      MIT
Package           None                      None
Dependents        None                      None

About Awesome-Video-Diffusion

showlab/Awesome-Video-Diffusion

A curated list of recent diffusion models for video generation, editing, and various other applications.

Organized into 20+ specialized categories, the collection spans foundation models and inference frameworks (HunyuanVideo, LTX-Video, Cosmos) alongside task-specific implementations for controllable generation, motion customization, video enhancement, talking head synthesis, and emerging domains such as 4D content and game generation. Entries link to implementations built on diffusion architectures, often combined with complementary techniques such as flow matching, reinforcement learning policies, and 3D/NeRF priors for physics-aware synthesis. Each entry provides direct links to the GitHub repository, arXiv paper, and project website, supporting reproducibility and comparative benchmarking across the video diffusion ecosystem.

About awesome-video-generation

AlonzoLeeeooo/awesome-video-generation

A collection of awesome video generation studies.

Organizes research across multiple video generation tasks—text-to-video, image-to-video, video editing, audio-to-video, and human image animation—with curated papers from major venues (CVPR, NeurIPS, ICCV) and accompanying resources such as model weights and benchmark datasets. Maintains structured, chronologically indexed references with direct links to papers, code repositories, and project pages, enabling researchers to track the evolution of video generation methods across years and conferences. Actively updated with recent work, including diffusion-based approaches and personalized generation techniques, it serves as a comprehensive literature index for the video generation community.

Scores updated daily from GitHub, PyPI, and npm data.