Video Editing Diffusion Models
Advanced video editing and manipulation using diffusion models, including motion control, composition, object editing, and frame interpolation. Does NOT include general video generation from text, basic inpainting tools, or video segmentation without editing capabilities.
135 video editing diffusion models are tracked, two of which score above 70 (the verified tier). The highest-rated is hao-ai-lab/FastVideo at 85/100, with 3,232 stars and 1,618 monthly downloads. Seven of the top 10 are actively maintained.
Get the tracked projects as JSON (the request below returns the top 20 via `limit=20`; the dataset holds 135):
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=diffusion&subcategory=video-editing-diffusion&limit=20"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
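For scripted access, a minimal Python sketch using only the standard library can consume the same endpoint. The URL and query parameters are taken from the curl example above; the response field names used below ("projects", "name", "tier") are assumptions for illustration, not the documented schema.

```python
# Minimal sketch: fetch the ranked projects from the quality API and print them.
# The endpoint and query string come from the curl example above; the response
# field names ("projects", "name", "tier") are assumptions and may not match
# the real schema.
import json
import urllib.request

URL = (
    "https://pt-edge.onrender.com/api/v1/datasets/quality"
    "?domain=diffusion&subcategory=video-editing-diffusion&limit=20"
)

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

# Assumed shape: {"projects": [{"name": ..., "tier": ..., ...}, ...]}
for project in data.get("projects", []):
    print(f'{project.get("name")}\t{project.get("tier")}')
```

The standard-library client keeps the example dependency-free; swapping in a library such as requests works the same way under the stated assumptions.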
| # | Model | Description | Score | Tier |
|---|---|---|---|---|
| 1 | hao-ai-lab/FastVideo | A unified inference and post-training framework for accelerated video generation. | | Verified |
| 2 | thu-ml/TurboDiffusion | TurboDiffusion: 100–200× Acceleration for Video Diffusion Models | | Verified |
| 3 | PKU-YuanGroup/Helios | Helios: Real-Time Long Video Generation Model | | Established |
| 4 | ModelTC/LightX2V | Light Image Video Generation Inference Framework | | Established |
| 5 | Lightricks/LTX-Video | Official repository for LTX-Video | | Established |
| 6 | PKU-YuanGroup/MagicTime | [TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators | | Established |
| 7 | Tencent-Hunyuan/HunyuanImage-3.0 | HunyuanImage-3.0: A Powerful Native Multimodal Model for Image Generation | | Established |
| 8 | thu-ml/DiT-Extrapolation | Official implementation for "RIFLEx: A Free Lunch for Length Extrapolation... | | Established |
| 9 | Tencent-Hunyuan/HunyuanVideo | HunyuanVideo: A Systematic Framework For Large Video Generation Model | | Established |
| 10 | OpenMOSS/MOVA | MOVA: Towards Scalable and Synchronized Video–Audio Generation | | Established |
| 11 | PKU-YuanGroup/ConsisID | [CVPR 2025 Highlight🔥] Identity-Preserving Text-to-Video Generation by... | | Emerging |
| 12 | Fantasy-AMAP/fantasy-talking | [ACM MM 2025] FantasyTalking: Realistic Talking Portrait Generation via... | | Emerging |
| 13 | Advocate99/DiffGesture | [CVPR'2023] Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation | | Emerging |
| 14 | SandAI-org/MAGI-1 | MAGI-1: Autoregressive Video Generation at Scale | | Emerging |
| 15 | Tencent/MimicMotion | High-Quality Human Motion Video Generation with Confidence-aware Pose Guidance | | Emerging |
| 16 | YanWenKun/Hunyuan3D-2-WinPortable | 📦 Portable package for running Hunyuan3D 2.0/2.1 on Windows (Hunyuan 3D 2.0/2.1 all-in-one package) | | Emerging |
| 17 | YoungSeng/DiffuseStyleGesture | DiffuseStyleGesture: Stylized Audio-Driven Co-Speech Gesture Generation with... | | Emerging |
| 18 | Tencent-Hunyuan/HunyuanCustom | HunyuanCustom: A Multimodal-Driven Architecture for Customized Video Generation | | Emerging |
| 19 | G-U-N/Gen-L-Video | The official implementation for "Gen-L-Video: Multi-Text to Long Video... | | Emerging |
| 20 | zli12321/FFGO-Video-Customization | Video Content Customization Using First Frame | | Emerging |
| 21 | Stanford-TML/EDGE | Official PyTorch Implementation of EDGE (CVPR 2023) | | Emerging |
| 22 | OpenDCAI/OpenWorldLib | Unified Codebase for Advanced World Models. | | Emerging |
| 23 | EzioBy/Ditto | [CVPR 2026] Ditto: Scaling Instruction-Based Video Editing with a... | | Emerging |
| 24 | Tencent-Hunyuan/HunyuanVideo-I2V | HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo | | Emerging |
| 25 | Tencent-Hunyuan/HunyuanImage-2.1 | HunyuanImage-2.1: An Efficient Diffusion Model for High-Resolution (2K)... | | Emerging |
| 26 | TencentARC/GenCompositor | [ICLR 2026] GenCompositor: Generative Video Compositing with Diffusion Transformer | | Emerging |
| 27 | nv-tlabs/ChronoEdit | [ICLR 2026] ChronoEdit: Towards Temporal Reasoning for Image Editing and... | | Emerging |
| 28 | mit-han-lab/radial-attention | [NeurIPS 2025] Radial Attention: O(nlogn) Sparse Attention with Energy Decay... | | Emerging |
| 29 | SenseTime-FVG/OpenDWM | An open source code repository of driving world models, with training,... | | Emerging |
| 30 | knightyxp/VideoCoF | [CVPR 2026] VideoCoF: Unified Video Editing with Temporal Reasoner | | Emerging |
| 31 | omerbt/TokenFlow | Official Pytorch Implementation for "TokenFlow: Consistent Diffusion... | | Emerging |
| 32 | PangzeCheung/OmniTransfer | [CVPR 2026] OmniTransfer: All-in-one Framework for Spatio-temporal Video Transfer | | Emerging |
| 33 | QuanjianSong/UniVST | [TPAMI 2025] Official Pytorch Code of the Paper "UniVST: A Unified Framework... | | Emerging |
| 34 | ChenyangQiQi/FateZero | [ICCV 2023 Oral] "FateZero: Fusing Attentions for Zero-shot Text-based Video Editing" | | Emerging |
| 35 | FareedKhan-dev/text2video-from-scratch | A Straightforward, Step-by-Step Implementation of a Video Diffusion Model | | Emerging |
| 36 | foivospar/NED | PyTorch implementation for NED (CVPR 2022). It can be used to manipulate the... | | Emerging |
| 37 | ali-vilab/VGen | Official repo for VGen: a holistic video generation ecosystem for video... | | Emerging |
| 38 | hustvl/MobileI2V | [ArXiv 2025] MobileI2V: Fast and High-Resolution Image-to-Video on Mobile Devices | | Emerging |
| 39 | text2cinemagraph/text2cinemagraph | Text2Cinemagraph: Text-Guided Synthesis of Eulerian Cinemagraphs [SIGGRAPH ASIA 2023] | | Emerging |
| 40 | menyifang/MIMO | Official implementation of "MIMO: Controllable Character Video Synthesis... | | Emerging |
| 41 | ali-vilab/videocomposer | Official repo for VideoComposer: Compositional Video Synthesis with Motion... | | Emerging |
| 42 | baaivision/NOVA | [ICLR 2025] Autoregressive Video Generation without Vector Quantization | | Emerging |
| 43 | YBYBZhang/ControlVideo | [ICLR 2024] Official pytorch implementation of "ControlVideo: Training-free... | | Emerging |
| 44 | Vchitect/SEINE | [ICLR 2024] SEINE: Short-to-Long Video Diffusion Model for Generative... | | Emerging |
| 45 | Fantasy-AMAP/fantasy-portrait | FantasyPortrait: Enhancing Multi-Character Portrait Animation with... | | Emerging |
| 46 | CIntellifusion/GeometryForcing | [ICLR26] Official implementation of Geometry Forcing: Marrying Video... | | Emerging |
| 47 | alimohammadiamirhossein/smite | Pytorch Implementation of "SMITE: Segment Me In TimE" (ICLR 2025) | | Emerging |
| 48 | nihaomiao/CVPR23_LFDM | The pytorch implementation of our CVPR 2023 paper "Conditional... | | Emerging |
| 49 | bytedance/X-Dyna | [CVPR 2025 Highlight] X-Dyna: Expressive Dynamic Human Image Animation | | Emerging |
| 50 | Zhen-Dong/Magic-Me | Codes for ID-Specific Video Customized Diffusion | | Emerging |
| 51 | showlab/MotionDirector | [ECCV 2024 Oral] MotionDirector: Motion Customization of Text-to-Video... | | Emerging |
| 52 | PhotonAISG/hunyuan-image3-finetune | Finetune HunyuanImage 3.0, an 80B unified understanding and generation model | | Emerging |
| 53 | flymin/MagicDrive-V2 | [ICCV 2025] Official implementation of the paper “MagicDrive-V2:... | | Emerging |
| 54 | Kevin-thu/Epona | Official Code for Epona: Autoregressive Diffusion World Model for Autonomous... | | Emerging |
| 55 | vivoCameraResearch/Magic-World | Official code for "MagicWorld: Towards long-horizon stability for... | | Emerging |
| 56 | caiyuanhao1998/Open-OmniVCus | OmniVCus: Feedforward Subject-driven Video Customization with Multimodal... | | Emerging |
| 57 | CVL-UESTC/MVAR | [ICLR 2026] MVAR: Visual Autoregressive Modeling with Scale and Spatial... | | Emerging |
| 58 | baaivision/URSA | [ICLR 2026] 🐻 Uniform Discrete Diffusion with Metric Path for Video Generation | | Emerging |
| 59 | FoundationVision/FlashVideo | [AAAI-2026] FlashVideo: Flowing Fidelity to Detail for Efficient... | | Emerging |
| 60 | researchmm/MM-Diffusion | [CVPR'23] MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint... | | Emerging |
| 61 | harlanhong/ACTalker | ICCV 2025 ACTalker: an end-to-end video diffusion framework for talking head... | | Emerging |
| 62 | JeremyCJM/DiffSHEG | [CVPR'24] DiffSHEG: A Diffusion-Based Approach for Real-Time Speech-driven... | | Emerging |
| 63 | RehgLab/RAVE | RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with... | | Emerging |
| 64 | LinghaoChan/HumanMAC | [ICCV-2023] Official code for work "HumanMAC: Masked Motion Completion for... | | Emerging |
| 65 | haoningwu3639/StoryGen | [CVPR 2024] Intelligent Grimm - Open-ended Visual Storytelling via Latent... | | Emerging |
| 66 | Kaihua-Chen/diffusion-vas | [CVPR 2025] Official code for Using Diffusion Priors for Video Amodal Segmentation | | Emerging |
| 67 | Reagan1311/Mask2IV | Mask2IV: Interaction-Centric Video Generation via Mask Trajectories (AAAI 2026) | | Emerging |
| 68 | SooLab/Free-Bloom | [NeurIPS 2023] Free-Bloom: Zero-Shot Text-to-Video Generator with LLM... | | Emerging |
| 69 | Yi-Shi94/AMDM | Interactive Character Control with Auto-Regressive Motion Diffusion Models | | Emerging |
| 70 | lixirui142/VidToMe | Official Pytorch Implementation for "VidToMe: Video Token Merging for... | | Emerging |
| 71 | TIGER-AI-Lab/ConsistI2V | ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation [TMLR 2024] | | Emerging |
| 72 | UuuNyaa/blender_motion_generate_tools | motion_generate_tools is a Blender addon for generating motion using MDM:... | | Emerging |
| 73 | alimama-creative/M3DDM-Video-Outpainting | [ACM MM 2023] Official implementation of "Hierarchical Masked 3D Diffusion... | | Experimental |
| 74 | songweige/content-debiased-fvd | [CVPR 2024] On the Content Bias in Fréchet Video Distance | | Experimental |
| 75 | invictus717/InteractiveVideo | InteractiveVideo: User-Centric Controllable Video Generation with... | | Experimental |
| 76 | knightyxp/VideoGrain | [ICLR 2025] VideoGrain: This repo is the official implementation of... | | Experimental |
| 77 | RQ-Wu/LAMP | [CVPR 2024] LAMP: Learn a Motion Pattern for Few-Shot Based Video Generation | | Experimental |
| 78 | Da1yuqin/TCDiff | Official code for our AAAI25 oral👑 paper Harmonious Group Choreography with... | | Experimental |
| 79 | sihyun-yu/PVDM | [CVPR'23] Video Probabilistic Diffusion Models in Projected Latent Space | | Experimental |
| 80 | DiffPoseTalk/DiffPoseTalk | DiffPoseTalk: Speech-Driven Stylistic 3D Facial Animation and Head Pose... | | Experimental |
| 81 | jpthu17/GraphMotion | [NeurIPS 2023] Act As You Wish: Fine-Grained Control of Motion Diffusion... | | Experimental |
| 82 | leob03/MultimodalDifMotionPred | [CVPR 2025 - HuMoGen] "MDMP: Multi-modal Diffusion for supervised Motion... | | Experimental |
| 83 | vpulab/ovam | Code for the paper Open-Vocabulary Attention Maps with Token Optimization... | | Experimental |
| 84 | lzz19980125/Hunyuan3D-2.1-Windows | A Windows-compatible version of Hunyuan3D-2.1 | | Experimental |
| 85 | KevinDayve/VTok | Unofficial implementation of VTok (https://arxiv.org/pdf/2602.04202) | | Experimental |
| 86 | Vicky0522/I2VEdit | [SIGGRAPH Asia 2024] I2VEdit: First-Frame-Guided Video Editing via... | | Experimental |
| 87 | jpthu17/DiffusionRet | [ICCV 2023] DiffusionRet: Generative Text-Video Retrieval with Diffusion Model | | Experimental |
| 88 | HyeonHo99/Video-Motion-Customization | VMC: Video Motion Customization using Temporal Attention Adaption for... | | Experimental |
| 89 | SobeyMIL/TVG | Code for "TVG: A Training-free Transition Video Generation Method with... | | Experimental |
| 90 | aimagelab/VHS | [CVPR2026 Findings] VHS: Verifier on Hidden States, an efficient... | | Experimental |
| 91 | yrcong/flatten | Pytorch Implementation of FLATTEN: optical FLow-guided ATTENtion for... | | Experimental |
| 92 | JIA-Lab-research/Video-P2P | Video-P2P: Video Editing with Cross-attention Control | | Experimental |
| 93 | harlanhong/ICCV2023-MCNET | The official code of our ICCV2023 work: Implicit Identity Representation... | | Experimental |
| 94 | DuNGEOnmassster/VideoGen-of-Thought | [NeurIPS 2025 NextVid Workshop Oral✨] Official Implementation of... | | Experimental |
| 95 | liangxuy/ReGenNet | [CVPR 2024] Official implementation of the paper "ReGenNet: Towards Human... | | Experimental |
| 96 | diffusion-motion-transfer/diffusion-motion-transfer | Official Pytorch Implementation for "Space-Time Diffusion Features for... | | Experimental |
| 97 | ziplab/BLADE | This is the official PyTorch implementation of "BLADE: Block-Sparse... | | Experimental |
| 98 | pabloruizponce/MixerMDM | [CVPR 2025] Official Implementation of "MixerMDM: Learnable Composition of... | | Experimental |
| 99 | alibaba/SRDiffusion | Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation | | Experimental |
| 100 | Vchitect/VEnhancer | Official codes of VEnhancer: Generative Space-Time Enhancement for Video Generation | | Experimental |
| 101 | MKFMIKU/vidm | [AAAI23 Oral] Official implementations of Video Implicit Diffusion Models | | Experimental |
| 102 | arthur-qiu/FreeTraj | Code for FreeTraj, a tuning-free method for trajectory-controllable video generation | | Experimental |
| 103 | taco-group/Pulse-of-Motion | The Pulse of Motion: Measuring Physical Frame Rate from Visual Dynamics | | Experimental |
| 104 | xiefan-guo/i4vgen | [arXiv 2024] I4VGen: Image as Free Stepping Stone for Text-to-Video Generation | | Experimental |
| 105 | shivangi-aneja/FaceTalk | [CVPR 2024] FaceTalk: Audio-Driven Motion Diffusion for Neural Parametric Head Models | | Experimental |
| 106 | masashi-hatano/EgoH4 | Official code release for "The Invisible EgoHand: 3D Hand Forecasting... | | Experimental |
| 107 | shim0114/T2V-Diffusion-Search | [NeurIPS 2025] Inference-Time Text-to-Video Alignment with Diffusion Latent... | | Experimental |
| 108 | EngineeringAI-LAB/3DXTalker | Official repository for 3DXTalker: An Integrated Framework for Expressive 3D... | | Experimental |
| 109 | desaixie/pa_vdm | CVPRW 2025 paper Progressive Autoregressive Video Diffusion Models:... | | Experimental |
| 110 | SobeyMIL/MVOC | Code for "MVOC: a training-free multiple video object composition method with... | | Experimental |
| 111 | aiiu-lab/MeDM | Official Pytorch Implementation of "MeDM: Mediating Image Diffusion Models... | | Experimental |
| 112 | steve-zeyu-zhang/MotionMamba | 🔥 [ECCV 2024] Motion Mamba: Efficient and Long Sequence Motion Generation | | Experimental |
| 113 | QuanjianSong/LightMotion | Official Pytorch Code of the Paper "LightMotion: A Light and Tuning-free... | | Experimental |
| 114 | RafailFridman/SceneScape | Official Pytorch Implementation for "SceneScape: Text-Driven Consistent... | | Experimental |
| 115 | stevenlsw/physgen | PhysGen: Rigid-Body Physics-Grounded Image-to-Video Generation (ECCV 2024) | | Experimental |
| 116 | FareedKhan-dev/train-text2video-scratch | This repository provides a PyTorch implementation of a video diffusion... | | Experimental |
| 117 | Ground-A-Video/Ground-A-Video | Ground-A-Video: Zero-shot Grounded Video Editing using Text-to-image... | | Experimental |
| 118 | Gen-Verse/HermesFlow | [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal... | | Experimental |
| 119 | wenhao728/VORTA | The code implementation of paper "VORTA: Efficient Video Diffusion via... | | Experimental |
| 120 | jeffreychou777/GenComm | [NeurIPS 2025] Official repo for paper "Pragmatic Heterogeneous... | | Experimental |
| 121 | finlay-hudson/TABE | Track Anything Behind Everything: Zero-Shot Amodal Video Object Segmentation | | Experimental |
| 122 | zhang-zx/AVID | This repository contains the code for the CVPR 2024 paper AVID: Any-Length... | | Experimental |
| 123 | pittisl/PhyT2V | Official code repo of CVPR 2025 paper PhyT2V: LLM-Guided Iterative... | | Experimental |
| 124 | k8xu/amodal | Official code for "Amodal Completion via Progressive Mixed Context... | | Experimental |
| 125 | Fantasy-AMAP/fantasy-talking2 | [AAAI 2026] FantasyTalking2: Timestep-Layer Adaptive Preference Optimization... | | Experimental |
| 126 | snap-research/SF-V | This repository contains the code for the NeurIPS 2024 paper SF-V: Single... | | Experimental |
| 127 | kyon317/Learned-Motion-Matching | Learned Motion Matching Implementation | | Experimental |
| 128 | Adamdad/vico | Vico: Compositional Video Generation as Flow Equalization | | Experimental |
| 129 | MOSTAFA1172m/Image-text-video-I2VGENXL | A PyTorch implementation of a text-image to video diffusion model with a... | | Experimental |
| 130 | nysp78/counterfactual-video-generation | A causally faithful framework for counterfactual video generation, guided... | | Experimental |
| 131 | DualParal-Project/DualParal | [AAAI 2026] Minute-Long Videos with Dual Parallelisms | | Experimental |
| 132 | eric-ai-lab/Mojito | Official repo for the paper "Mojito: Motion Trajectory and Intensity Control... | | Experimental |
| 133 | makepixelsdance/makepixelsdance.github.io | Homepage for PixelDance. Paper -> https://arxiv.org/abs/2311.10982 | | Experimental |
| 134 | Shaadalam9/traffic-pipeline | This repository contains the code and analysis for the research paper "Deep... | | Experimental |
| 135 | xie-lab-ml/IV-mixed-Sampler | [ICLR 2025] IV-Mixed Sampler: Leveraging Image Diffusion Models for Enhanced... | | Experimental |