AliaksandrSiarohin/first-order-model
This repository contains the source code for the paper "First Order Motion Model for Image Animation".
Leverages keypoint-based motion transfer to animate still images from driving video sequences, supporting both absolute and relative coordinate modes to trade flexibility against pose consistency. The architecture combines keypoint detection, local affine transformations, and neural rendering, with separate models trained per domain (facial animation on VoxCeleb, fashion, animal motion). Includes pre-trained checkpoints, Docker and Colab deployment options, and integrations with face-alignment and motion co-segmentation libraries for extended capabilities such as unsupervised face swapping.
15,007 stars. No commits in the last 6 months.
Stars
15,007
Forks
3,278
Language
Jupyter Notebook
License
MIT
Category
diffusion
Last pushed
Nov 14, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/AliaksandrSiarohin/first-order-model"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
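The same endpoint can be queried from code. A minimal Python sketch, assuming only the URL pattern shown in the curl command above; the response fields used here (`stars`, `forks`, `commits_30d`) are illustrative assumptions, not a documented schema:

```python
import json
from urllib.parse import quote

# Endpoint base taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality URL for any owner/repo pair."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("AliaksandrSiarohin", "first-order-model")

# Hypothetical response body for illustration only -- the real field
# names may differ; fetch `url` (e.g. with urllib.request) to see them.
sample = json.loads('{"stars": 15007, "forks": 3278, "commits_30d": 0}')
print(url)
print(sample["stars"])
```

Anonymous access is rate-limited to 100 requests/day, so cache responses where possible.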
Related models
kenziyuliu/MS-G3D
[CVPR 2020 Oral] PyTorch implementation of "Disentangling and Unifying Graph Convolutions for...
yoyo-nb/Thin-Plate-Spline-Motion-Model
[CVPR 2022] Thin-Plate Spline Motion Model for Image Animation.
sergeytulyakov/mocogan
MoCoGAN: Decomposing Motion and Content for Video Generation
DK-Jang/motion_puzzle
Motion Puzzle - Official PyTorch implementation
paulstarke/PhaseBetweener
Creating animation sequences between sparse key frames using motion phase features.