G-U-N/AnimateLCM
[SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
Implements consistency-model-based acceleration for video diffusion through decoupled learning (separately optimizing spatial image-generation priors and temporal motion priors), enabling 4-step inference for text-to-video, image-to-video, and video stylization tasks. Provides three model variants (T2V, SVD-xt, I2V) with spatial LoRA weights and motion modules, compatible in zero-shot mode with Stable Diffusion adapters such as ControlNet and IP-Adapter, and integrated into the diffusers and ComfyUI ecosystems.
660 stars. No commits in the last 6 months.
Stars: 660
Forks: 47
Language: Python
License: MIT
Category: diffusion
Last pushed: Oct 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/G-U-N/AnimateLCM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
pathak22/context-encoder
[CVPR 2016] Unsupervised Feature Learning by Image Inpainting using GANs
knazeri/edge-connect
EdgeConnect: Structure Guided Image Inpainting using Edge Prediction, ICCV 2019...
shepnerd/inpainting_gmcnn
Image Inpainting via Generative Multi-column Convolutional Neural Networks, NeurIPS 2018
youyuge34/Anime-InPainting
An application tool built on edge-connect for anime inpainting and drawing: automatically restores anime character images (mosaic removal, inpainting, blemish removal).
otenim/GLCIC-PyTorch
A High-Quality PyTorch Implementation of "Globally and Locally Consistent Image Completion".