Awesome-Video-Diffusion and Awesome-Controllable-T2I-Diffusion-Models

| Metric        | Awesome-Video-Diffusion               | Awesome-Controllable-T2I-Diffusion-Models |
|---------------|---------------------------------------|-------------------------------------------|
| Maintenance   | 20/25                                 | 0/25                                      |
| Adoption      | 10/25                                 | 10/25                                     |
| Maturity      | 8/25                                  | 16/25                                     |
| Community     | 18/25                                 | 12/25                                     |
| Stars         | 5,531                                 | 1,112                                     |
| Forks         | 345                                   | 33                                        |
| Downloads     | —                                     | —                                         |
| Commits (30d) | 20                                    | 0                                         |
| Language      | —                                     | —                                         |
| License       | —                                     | MIT                                       |
| Flags         | No License, No Package, No Dependents | Stale 6m, No Package, No Dependents       |

About Awesome-Video-Diffusion

showlab/Awesome-Video-Diffusion

A curated list of recent diffusion models for video generation, editing, and various other applications.

Organized into 20+ specialized categories, the collection spans foundation models and inference frameworks (HunyuanVideo, LTX-Video, Cosmos) alongside task-specific work on controllable generation, motion customization, video enhancement, talking-head synthesis, and emerging areas such as 4D content and game generation. The curated entries point to implementations built on diffusion architectures, often combined with complementary techniques such as flow matching, reinforcement-learning policies, and 3D/NeRF priors for physics-aware synthesis. Each entry links to its GitHub repository, arXiv paper, and project website, supporting reproducibility and comparative benchmarking across the video diffusion ecosystem.

About Awesome-Controllable-T2I-Diffusion-Models

PRIV-Creation/Awesome-Controllable-T2I-Diffusion-Models

A collection of resources on controllable generation with text-to-image diffusion models.

Scores are updated daily from GitHub, PyPI, and npm data.
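
The raw GitHub inputs behind the table above (stars, forks, license, 30-day commit count) can be pulled from the public GitHub REST API. The sketch below is illustrative only, not the comparison site's actual scoring pipeline: the `GET /repos/{owner}/{repo}` and `GET /repos/{owner}/{repo}/commits` endpoints are real, but the 30-day counting logic and the repository list are assumptions made for this example.

```python
# Illustrative sketch: fetch the raw repo metrics shown above from the
# public GitHub REST API. Not the comparison site's actual pipeline;
# the 30-day commit count logic is an assumption for this example.
from datetime import datetime, timedelta, timezone

import requests

API = "https://api.github.com"

REPOS = [
    "showlab/Awesome-Video-Diffusion",
    "PRIV-Creation/Awesome-Controllable-T2I-Diffusion-Models",
]


def repo_metrics(full_name: str) -> dict:
    """Return stars, forks, license, and commit count over the last 30 days."""
    meta = requests.get(f"{API}/repos/{full_name}", timeout=10).json()

    since = (datetime.now(timezone.utc) - timedelta(days=30)).isoformat()
    commits = requests.get(
        f"{API}/repos/{full_name}/commits",
        params={"since": since, "per_page": 100},  # capped at 100 per page
        timeout=10,
    ).json()

    return {
        "stars": meta.get("stargazers_count"),
        "forks": meta.get("forks_count"),
        "license": (meta.get("license") or {}).get("spdx_id"),
        "commits_30d": len(commits) if isinstance(commits, list) else None,
    }


if __name__ == "__main__":
    for name in REPOS:
        print(name, repo_metrics(name))
```

Unauthenticated requests are rate-limited by GitHub, so a token would be needed for repeated daily runs; the composite Maintenance/Adoption/Maturity/Community scores are the site's own weighting and are not reproduced here.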