Video Editing Diffusion Models

Advanced video editing and manipulation using diffusion models, including motion control, composition, object editing, and frame interpolation. Does NOT include general video generation from text, basic inpainting tools, or video segmentation without editing capabilities.

There are 135 video editing diffusion models tracked. 2 score above 70 (Verified tier). The highest-rated is hao-ai-lab/FastVideo at 85/100, with 3,232 stars and 1,618 monthly downloads. 7 of the top 10 are actively maintained.

Get the projects as JSON (the example below requests the top 20; raise the limit parameter to fetch the full list of 135):

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=diffusion&subcategory=video-editing-diffusion&limit=20"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
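If you would rather post-process the JSON locally, the tier cutoffs implied by the table below (70+ Verified, 50–69 Established, 30–49 Emerging, below 30 Experimental) can be reproduced in a few lines. This is a sketch: the field names in the sample payload are assumptions, not a documented API schema.

```python
import json

# Tier thresholds inferred from the ranking table on this page
# (not documented API behavior): 70+ Verified, 50-69 Established,
# 30-49 Emerging, below 30 Experimental.
def tier(score: int) -> str:
    if score >= 70:
        return "Verified"
    if score >= 50:
        return "Established"
    if score >= 30:
        return "Emerging"
    return "Experimental"

# Hypothetical response shape -- the real payload's field names may differ.
payload = json.loads('[{"model": "hao-ai-lab/FastVideo", "score": 85},'
                     ' {"model": "omerbt/TokenFlow", "score": 38}]')
for item in payload:
    item["tier"] = tier(item["score"])

print([(p["model"], p["tier"]) for p in payload])
# -> [('hao-ai-lab/FastVideo', 'Verified'), ('omerbt/TokenFlow', 'Emerging')]
```

Swap the sample payload for the body of the curl response above and the same bucketing applies to all 135 entries.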

# | Model | Score | Tier
1 hao-ai-lab/FastVideo

A unified inference and post-training framework for accelerated video generation.

85
Verified
2 thu-ml/TurboDiffusion

TurboDiffusion: 100–200× Acceleration for Video Diffusion Models

73
Verified
3 PKU-YuanGroup/Helios

Helios: Real-Time Long Video Generation Model

62
Established
4 ModelTC/LightX2V

Light Image/Video Generation Inference Framework

61
Established
5 Lightricks/LTX-Video

Official repository for LTX-Video

56
Established
6 PKU-YuanGroup/MagicTime

[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators

54
Established
7 Tencent-Hunyuan/HunyuanImage-3.0

HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation

53
Established
8 thu-ml/DiT-Extrapolation

Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation...

53
Established
9 Tencent-Hunyuan/HunyuanVideo

HunyuanVideo: A Systematic Framework For Large Video Generation Model

52
Established
10 OpenMOSS/MOVA

MOVA: Towards Scalable and Synchronized Video–Audio Generation

50
Established
11 PKU-YuanGroup/ConsisID

[CVPR 2025 Highlight🔥] Identity-Preserving Text-to-Video Generation by...

49
Emerging
12 Fantasy-AMAP/fantasy-talking

[ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via...

48
Emerging
13 Advocate99/DiffGesture

[CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation

45
Emerging
14 SandAI-org/MAGI-1

MAGI-1: Autoregressive Video Generation at Scale

45
Emerging
15 Tencent/MimicMotion

High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance

44
Emerging
16 YanWenKun/Hunyuan3D-2-WinPortable

📦 Portable package for running Hunyuan3D 2.0/2.1 on Windows. | Hunyuan 3D 2.0/2.1 all-in-one package

43
Emerging
17 YoungSeng/DiffuseStyleGesture

DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with...

43
Emerging
18 Tencent-Hunyuan/HunyuanCustom

HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation

43
Emerging
19 G-U-N/Gen-L-Video

The official implementation for "Gen-L-Video: Multi-Text to Long Video...

42
Emerging
20 zli12321/FFGO-Video-Customization

Video Content Customization Using First Frame

42
Emerging
21 Stanford-TML/EDGE

Official PyTorch Implementation of EDGE (CVPR 2023)

42
Emerging
22 OpenDCAI/OpenWorldLib

Unified Codebase for Advanced World Models.

41
Emerging
23 EzioBy/Ditto

[CVPR 2026] Ditto: Scaling Instruction-Based Video Editing with a...

41
Emerging
24 Tencent-Hunyuan/HunyuanVideo-I2V

HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo

41
Emerging
25 Tencent-Hunyuan/HunyuanImage-2.1

HunyuanImage-2.1: An Efficient Diffusion Model for High-Resolution (2K)...

41
Emerging
26 TencentARC/GenCompositor

[ICLR 2026] GenCompositor: Generative Video Compositing with Diffusion Transformer

40
Emerging
27 nv-tlabs/ChronoEdit

[ICLR 2026] ChronoEdit: Towards Temporal Reasoning for Image Editing and...

39
Emerging
28 mit-han-lab/radial-attention

[NeurIPS 2025] Radial Attention: O(nlogn) Sparse Attention with Energy Decay...

39
Emerging
29 SenseTime-FVG/OpenDWM

An open source code repository of driving world models, with training,...

39
Emerging
30 knightyxp/VideoCoF

[CVPR 2026] VideoCoF: Unified Video Editing with Temporal Reasoner

39
Emerging
31 omerbt/TokenFlow

Official Pytorch Implementation for "TokenFlow: Consistent Diffusion...

38
Emerging
32 PangzeCheung/OmniTransfer

[CVPR 2026] OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer

38
Emerging
33 QuanjianSong/UniVST

[TPAMI 2025] Official Pytorch Code of the Paper "UniVST: A Unified Framework...

38
Emerging
34 ChenyangQiQi/FateZero

[ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing"

38
Emerging
35 FareedKhan-dev/text2video-from-scratch

A Straightforward, Step-by-Step Implementation of a Video Diffusion Model

38
Emerging
36 foivospar/NED

PyTorch implementation for NED (CVPR 2022). It can be used to manipulate the...

37
Emerging
37 ali-vilab/VGen

Official repo for VGen: a holistic video generation ecosystem for video...

37
Emerging
38 hustvl/MobileI2V

[ArXiv 2025] MobileI2V: Fast and High-Resolution Image-to-Video on Mobile Devices

37
Emerging
39 text2cinemagraph/text2cinemagraph

Text2Cinemagraph: Text-Guided Synthesis of Eulerian Cinemagraphs [SIGGRAPH ASIA 2023]

37
Emerging
40 menyifang/MIMO

Official implementation of "MIMO: Controllable Character Video Synthesis...

36
Emerging
41 ali-vilab/videocomposer

Official repo for VideoComposer: Compositional Video Synthesis with Motion...

36
Emerging
42 baaivision/NOVA

[ICLR 2025] Autoregressive Video Generation without Vector Quantization

36
Emerging
43 YBYBZhang/ControlVideo

[ICLR 2024] Official pytorch implementation of "ControlVideo: Training-free...

35
Emerging
44 Vchitect/SEINE

[ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative...

35
Emerging
45 Fantasy-AMAP/fantasy-portrait

FantasyPortrait: Enhancing Multi-Character Portrait Animation with...

35
Emerging
46 CIntellifusion/GeometryForcing

[ICLR 2026] Official implementation of Geometry Forcing: Marrying Video...

35
Emerging
47 alimohammadiamirhossein/smite

Pytorch Implementation of "SMITE: Segment Me In TimE" (ICLR 2025)

35
Emerging
48 nihaomiao/CVPR23_LFDM

The pytorch implementation of our CVPR 2023 paper "Conditional...

34
Emerging
49 bytedance/X-Dyna

[CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation

34
Emerging
50 Zhen-Dong/Magic-Me

Codes for ID-Specific Video Customized Diffusion

34
Emerging
51 showlab/MotionDirector

[ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video...

34
Emerging
52 PhotonAISG/hunyuan-image3-finetune

Finetune HunyuanImage 3.0, an 80B unified understanding and generation model

33
Emerging
53 flymin/MagicDrive-V2

[ICCV 2025] Official implementation of the paper “MagicDrive-V2:...

33
Emerging
54 Kevin-thu/Epona

Official Code for Epona: Autoregressive Diffusion World Model for Autonomous...

33
Emerging
55 vivoCameraResearch/Magic-World

Official code for "MagicWorld: Towards Long-Horizon Stability for...

33
Emerging
56 caiyuanhao1998/Open-OmniVCus

OmniVCus: Feedforward Subject-driven Video Customization with Multimodal...

32
Emerging
57 CVL-UESTC/MVAR

[ICLR 2026] MVAR: Visual Autoregressive Modeling with Scale and Spatial...

32
Emerging
58 baaivision/URSA

[ICLR 2026] 🐻 Uniform Discrete Diffusion with Metric Path for Video Generation

32
Emerging
59 FoundationVision/FlashVideo

[AAAI-2026]FlashVideo: Flowing Fidelity to Detail for Efficient...

32
Emerging
60 researchmm/MM-Diffusion

[CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint...

32
Emerging
61 harlanhong/ACTalker

[ICCV 2025] ACTalker: An end-to-end video diffusion framework for talking head...

31
Emerging
62 JeremyCJM/DiffSHEG

[CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven...

31
Emerging
63 RehgLab/RAVE

RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with...

31
Emerging
64 LinghaoChan/HumanMAC

[ICCV-2023] Official code for work "HumanMAC: Masked Motion Completion for...

31
Emerging
65 haoningwu3639/StoryGen

[CVPR 2024] Intelligent Grimm - Open-ended Visual Storytelling via Latent...

31
Emerging
66 Kaihua-Chen/diffusion-vas

[CVPR 2025] Official code for Using Diffusion Priors for Video Amodal Segmentation

31
Emerging
67 Reagan1311/Mask2IV

Mask2IV: Interaction-Centric Video Generation via Mask Trajectories (AAAI 2026)

31
Emerging
68 SooLab/Free-Bloom

[NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM...

30
Emerging
69 Yi-Shi94/AMDM

Interactive Character Control with Auto-Regressive Motion Diffusion Models

30
Emerging
70 lixirui142/VidToMe

Official Pytorch Implementation for "VidToMe: Video Token Merging for...

30
Emerging
71 TIGER-AI-Lab/ConsistI2V

ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024]

30
Emerging
72 UuuNyaa/blender_motion_generate_tools

motion_generate_tools is a Blender addon for generating motion using MDM:...

30
Emerging
73 alimama-creative/M3DDM-Video-Outpainting

[ACM MM 2023] Official implementation of "Hierarchical Masked 3D Diffusion...

29
Experimental
74 songweige/content-debiased-fvd

[CVPR 2024] On the Content Bias in Fréchet Video Distance

29
Experimental
75 invictus717/InteractiveVideo

InteractiveVideo: User-Centric Controllable Video Generation with...

29
Experimental
76 knightyxp/VideoGrain

[ICLR 2025] VideoGrain: This repo is the official implementation of...

29
Experimental
77 RQ-Wu/LAMP

[CVPR 2024] | LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation

29
Experimental
78 Da1yuqin/TCDiff

Official code for our AAAI 2025 Oral👑 paper Harmonious Group Choreography with...

29
Experimental
79 sihyun-yu/PVDM

[CVPR'23] Video Probabilistic Diffusion Models in Projected Latent Space

29
Experimental
80 DiffPoseTalk/DiffPoseTalk

DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose...

28
Experimental
81 jpthu17/GraphMotion

[NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion...

28
Experimental
82 leob03/MultimodalDifMotionPred

[CVPR 2025 - HuMoGen] "MDMP: Multi-modal Diffusion for supervised Motion...

28
Experimental
83 vpulab/ovam

Code for the paper Open-Vocabulary Attention Maps with Token Optimization...

28
Experimental
84 lzz19980125/Hunyuan3D-2.1-Windows

A Windows-compatible version of Hunyuan3D-2.1

27
Experimental
85 KevinDayve/VTok

Unofficial implementation of VTok (https://arxiv.org/pdf/2602.04202)

27
Experimental
86 Vicky0522/I2VEdit

[SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via...

27
Experimental
87 jpthu17/DiffusionRet

[ICCV 2023] DiffusionRet: Generative Text-Video Retrieval with Diffusion Model

27
Experimental
88 HyeonHo99/Video-Motion-Customization

VMC: Video Motion Customization using Temporal Attention Adaption for...

27
Experimental
89 SobeyMIL/TVG

code for "TVG: A Training-free Transition Video Generation Method with...

27
Experimental
90 aimagelab/VHS

[CVPR2026 Findings] VHS: Verifier on Hidden States, an efficient...

27
Experimental
91 yrcong/flatten

Pytorch Implementation of FLATTEN: optical FLow-guided ATTENtion for...

26
Experimental
92 JIA-Lab-research/Video-P2P

Video-P2P: Video Editing with Cross-attention Control

26
Experimental
93 harlanhong/ICCV2023-MCNET

The official code of our ICCV2023 work: Implicit Identity Representation...

26
Experimental
94 DuNGEOnmassster/VideoGen-of-Thought

[NeurIPS 2025 NextVid Workshop Oral✨] Official Implementation of...

26
Experimental
95 liangxuy/ReGenNet

[CVPR 2024] Official implementation of the paper "ReGenNet: Towards Human...

26
Experimental
96 diffusion-motion-transfer/diffusion-motion-transfer

Official Pytorch Implementation for "Space-Time Diffusion Features for...

26
Experimental
97 ziplab/BLADE

This is the official PyTorch implementation of "BLADE: Block-Sparse...

25
Experimental
98 pabloruizponce/MixerMDM

[CVPR 2025] Official Implementation of "MixerMDM: Learnable Composition of...

25
Experimental
99 alibaba/SRDiffusion

Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation

25
Experimental
100 Vchitect/VEnhancer

Official codes of VEnhancer: Generative Space-Time Enhancement for Video Generation

25
Experimental
101 MKFMIKU/vidm

[AAAI23 Oral] Official implementations of Video Implicit Diffusion Models

25
Experimental
102 arthur-qiu/FreeTraj

Code for FreeTraj, a tuning-free method for trajectory-controllable video generation

25
Experimental
103 taco-group/Pulse-of-Motion

The Pulse of Motion: Measuring Physical Frame Rate from Visual Dynamics

25
Experimental
104 xiefan-guo/i4vgen

[arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation

25
Experimental
105 shivangi-aneja/FaceTalk

[CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models

24
Experimental
106 masashi-hatano/EgoH4

Official code release for "The Invisible EgoHand: 3D Hand Forecasting...

24
Experimental
107 shim0114/T2V-Diffusion-Search

[NeurIPS 2025] Inference-Time Text-to-Video Alignment with Diffusion Latent...

24
Experimental
108 EngineeringAI-LAB/3DXTalker

Official repository for 3DXTalker: An Integrated Framework for Expressive 3D...

23
Experimental
109 desaixie/pa_vdm

CVPRW 2025 paper Progressive Autoregressive Video Diffusion Models:...

23
Experimental
110 SobeyMIL/MVOC

Code for "MVOC: a training-free multiple video object composition method with...

22
Experimental
111 aiiu-lab/MeDM

Official Pytorch Implementation of "MeDM: Mediating Image Diffusion Models...

22
Experimental
112 steve-zeyu-zhang/MotionMamba

🔥 [ECCV 2024] Motion Mamba: Efficient and Long Sequence Motion Generation

22
Experimental
113 QuanjianSong/LightMotion

Official Pytorch Code of the Paper "LightMotion: A Light and Tuning-free...

22
Experimental
114 RafailFridman/SceneScape

Official Pytorch Implementation for "SceneScape: Text-Driven Consistent...

21
Experimental
115 stevenlsw/physgen

PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation (ECCV 2024)

21
Experimental
116 FareedKhan-dev/train-text2video-scratch

This repository provides a PyTorch implementation of a video diffusion...

21
Experimental
117 Ground-A-Video/Ground-A-Video

Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image...

21
Experimental
118 Gen-Verse/HermesFlow

[NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal...

20
Experimental
119 wenhao728/VORTA

The code implementation of paper "VORTA: Efficient Video Diffusion via...

20
Experimental
120 jeffreychou777/GenComm

[NeurIPS 2025] Official repo for paper "Pragmatic Heterogeneous...

20
Experimental
121 finlay-hudson/TABE

Track Anything Behind Everything: Zero-Shot Amodal Video Object Segmentation

20
Experimental
122 zhang-zx/AVID

This repository contains the code for the CVPR 2024 paper AVID: Any-Length...

19
Experimental
123 pittisl/PhyT2V

Official code repo of the CVPR 2025 paper PhyT2V: LLM-Guided Iterative...

19
Experimental
124 k8xu/amodal

Official code for "Amodal Completion via Progressive Mixed Context...

18
Experimental
125 Fantasy-AMAP/fantasy-talking2

[AAAI 2026] FantasyTalking2: Timestep-Layer Adaptive Preference Optimization...

17
Experimental
126 snap-research/SF-V

This repository contains the code for the NeurIPS 2024 paper SF-V: Single...

17
Experimental
127 kyon317/Learned-Motion-Matching

Learned Motion Matching Implementation

16
Experimental
128 Adamdad/vico

Vico: Compositional Video Generation as Flow Equalization

16
Experimental
129 MOSTAFA1172m/Image-text-video-I2VGENXL

A PyTorch implementation of a text-image to video diffusion model with a...

15
Experimental
130 nysp78/counterfactual-video-generation

A causally faithful framework for counterfactual video generation, guided...

15
Experimental
131 DualParal-Project/DualParal

[AAAI 2026] Minute-Long Videos with Dual Parallelisms

15
Experimental
132 eric-ai-lab/Mojito

Official repo for the paper "Mojito: Motion Trajectory and Intensity Control...

14
Experimental
133 makepixelsdance/makepixelsdance.github.io

Homepage for PixelDance. Paper -> https://arxiv.org/abs/2311.10982

14
Experimental
134 Shaadalam9/traffic-pipeline

This repository contains the code and analysis for the research paper "Deep...

14
Experimental
135 xie-lab-ml/IV-mixed-Sampler

[ICLR2025] IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced...

11
Experimental