G-U-N/AnimateLCM

[SIGGRAPH ASIA 2024 TCS] AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data

Overall score: 42 / 100 (Emerging)

Implements consistency-model-based acceleration for video diffusion through decoupled learning, optimizing spatial image-generation priors and temporal motion priors separately, which enables 4-step inference for text-to-video, image-to-video, and video stylization tasks. Provides three model variants (T2V, SVD-xt, I2V) with spatial LoRA weights and motion modules that work zero-shot with Stable Diffusion adapters, ControlNet, and IP-Adapter, and is integrated into the diffusers and ComfyUI ecosystems.
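The 4-step inference path described above can be sketched with the diffusers AnimateDiff pipeline. This is a minimal sketch, assuming diffusers with AnimateLCM support and the published `wangfuyun/AnimateLCM` motion-adapter and LoRA weights; the base model id (`emilianJR/epiCRealism`) and LoRA scale are illustrative choices, not prescribed by this repo.

```python
# Sketch: 4-step text-to-video with AnimateLCM via diffusers.
# Assumes diffusers >= 0.26; model ids below are assumptions from the
# AnimateLCM release, not guaranteed by this page.
NUM_STEPS = 4  # AnimateLCM targets as few as 4 sampling steps


def build_pipeline():
    # Imports kept inside the function so the sketch can be read
    # (and the module imported) without diffusers installed.
    import torch
    from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter

    # Temporal motion prior: the AnimateLCM motion module.
    adapter = MotionAdapter.from_pretrained("wangfuyun/AnimateLCM")

    # Spatial prior: any SD 1.5-family base model (illustrative choice).
    pipe = AnimateDiffPipeline.from_pretrained(
        "emilianJR/epiCRealism",
        motion_adapter=adapter,
        torch_dtype=torch.float16,
    )

    # LCM scheduler enables the few-step consistency sampling.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

    # Spatial LoRA weights from the AnimateLCM release.
    pipe.load_lora_weights(
        "wangfuyun/AnimateLCM",
        weight_name="AnimateLCM_sd15_t2v_lora.safetensors",
        adapter_name="lcm-lora",
    )
    pipe.set_adapters(["lcm-lora"], [0.8])
    return pipe


# Usage (requires a CUDA GPU and downloads several GB of weights):
# pipe = build_pipeline().to("cuda")
# frames = pipe(
#     prompt="a dog running on the beach, cinematic",
#     num_frames=16,
#     guidance_scale=2.0,
#     num_inference_steps=NUM_STEPS,
# ).frames[0]
```

The same pipeline object accepts ControlNet or IP-Adapter components in the usual diffusers fashion, which is what the zero-shot compatibility above refers to.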

660 stars. No commits in the last 6 months.

Flags: Stale (6 mo) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 660
Forks: 47
Language: Python
License: MIT
Category: image-inpainting
Last pushed: Oct 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/G-U-N/AnimateLCM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
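The curl call above can also be made from Python with only the standard library. This is a sketch under assumptions: the JSON field names and the `Authorization: Bearer` header for keyed access are guesses, so inspect the actual response and the API docs before relying on them.

```python
# Sketch: querying the quality endpoint from Python (stdlib only).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{BASE}/{category}/{owner}/{repo}"


def fetch_quality(category, owner, repo, api_key=None):
    """Fetch and decode the quality JSON for one repo."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        # Hypothetical header scheme -- confirm against the API docs.
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Example (makes a network request):
# data = fetch_quality("diffusion", "G-U-N", "AnimateLCM")
```

Unkeyed requests count against the 100/day limit noted above; pass a free key for the higher quota.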