sergeytulyakov/mocogan
MoCoGAN: Decomposing Motion and Content for Video Generation
Decomposes video generation into separate latent codes for motion dynamics and static content, enabling independent control of each: the same subject can be generated performing different actions, or different subjects performing the same action. The architecture is a GAN in which a fixed content code and a per-frame motion code are processed through distinct pathways before frame synthesis. Demonstrated with disentangled generative control on facial-expression, human-action, and Tai-Chi datasets.
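The core factorization can be sketched in a few lines: one content code is sampled once per clip, while motion codes come from a recurrence over frames. This is a minimal NumPy illustration of the latent decomposition only, not the repository's actual generator; the tanh recurrence is a stand-in for the learned RNN used in the paper, and all dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_video_latents(n_frames, dim_content=50, dim_motion=10):
    """Build per-frame latent codes the way MoCoGAN factorizes them:
    one content code shared across the clip, plus a motion code per
    frame produced by a recurrence (a stand-in for the paper's RNN)."""
    z_content = rng.standard_normal(dim_content)   # fixed for the whole clip
    z_motion = np.zeros((n_frames, dim_motion))
    h = np.zeros(dim_motion)
    for t in range(n_frames):
        # toy recurrence in place of the learned motion RNN
        h = np.tanh(0.9 * h + rng.standard_normal(dim_motion))
        z_motion[t] = h
    # each frame's generator input is [content || motion_t]
    content_tiled = np.tile(z_content, (n_frames, 1))
    return np.concatenate([content_tiled, z_motion], axis=1)

latents = sample_video_latents(16)
print(latents.shape)  # (16, 60): 50 content dims + 10 motion dims per frame
```

Fixing the first 50 columns while varying the rest changes the action but not the subject; swapping only the content block does the opposite.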
602 stars. No commits in the last 6 months.
Stars: 602
Forks: 113
Language: Python
License: —
Category: —
Last pushed: Dec 17, 2021
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/sergeytulyakov/mocogan"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
AliaksandrSiarohin/first-order-model
This repository contains the source code for the paper First Order Motion Model for Image Animation
kenziyuliu/MS-G3D
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for...
yoyo-nb/Thin-Plate-Spline-Motion-Model
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
DK-Jang/motion_puzzle
Motion Puzzle - Official PyTorch implementation
paulstarke/PhaseBetweener
Creating animation sequences between sparse key frames using motion phase features.