OpenVGLab/OmniLottie
[CVPR 2026🔥] 🧑‍🎨 OmniLottie, an open-source multimodal-instructed vector animation generator that produces Lottie JSONs.
Leverages Vision-Language Models to generate Lottie animations from multimodal inputs (text, images, videos) using parameterized Lottie tokens, producing complex vector animations in a single end-to-end pipeline. Includes the MMLottie-2M dataset (2M annotated animations) and the MMLottieBench evaluation protocol for standardized benchmarking of multimodal vector animation generation. Supports both the original PyTorch format and HuggingFace safetensors, with seamless integration via the `from_pretrained()` API, plus community-contributed ComfyUI plugin support.
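Since the generator's output is plain Lottie JSON, a downstream consumer can sanity-check a generated document before handing it to a Lottie player. The sketch below is not OmniLottie's API; it only uses the standard top-level keys of the public Lottie schema (`v`, `fr`, `ip`/`op`, `w`/`h`, `layers`), and the helper name is ours.

```python
import json

# A minimal Lottie document skeleton of the kind OmniLottie emits.
# Field names follow the public Lottie schema: "v" (bodymovin version),
# "fr" (frame rate), "ip"/"op" (in/out frame), "w"/"h" (canvas size).
minimal_lottie = {
    "v": "5.7.0",
    "fr": 30,
    "ip": 0,
    "op": 60,      # 60 frames at 30 fps, i.e. a 2-second animation
    "w": 512,
    "h": 512,
    "nm": "omnilottie-sample",
    "layers": [],  # generated shape layers would go here
}

REQUIRED_KEYS = {"v", "fr", "ip", "op", "w", "h", "layers"}

def is_valid_lottie(doc: dict) -> bool:
    """Cheap structural check before handing the JSON to a Lottie player."""
    return REQUIRED_KEYS.issubset(doc)

print(is_valid_lottie(minimal_lottie))   # True
print(json.dumps(minimal_lottie)[:60])   # serialized form a player would load
```

A check like this is useful in a pipeline because model-generated JSON can occasionally drop required fields, and a missing `op` or `layers` key fails silently in some players.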
503 stars. Actively maintained with 12 commits in the last 30 days.
Stars: 503
Forks: 27
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 07, 2026
Commits (30d): 12
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/generative-ai/OpenVGLab/OmniLottie"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
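For scripted access, the endpoint URL above follows a `category/owner/repo` pattern. A minimal Python sketch (the helper name is ours; the URL pattern is taken from the curl example, and the response schema is not documented here):

```python
from urllib.parse import quote

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the stats-API URL for any repo in a category."""
    return f"{API_BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"

url = quality_url("generative-ai", "OpenVGLab", "OmniLottie")
print(url)

# An actual request would then be:
#   import urllib.request, json
#   data = json.load(urllib.request.urlopen(url))
```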
Related tools
Mrkomiljon/awesome-generative-ai
Multimodal generative AI resources : talking heads, STT, TTS, image & video generation, and more.
NVIDIA/Maya-ACE
Maya-ACE: A Reference Client Implementation for NVIDIA ACE Audio2Face Service
jdh-algo/JoyHallo
JoyHallo: Digital human model for Mandarin
F-R-L/forge-film
Multi-model DAG-driven parallel AI film generation — parallel speedup scales with scene...
michaelzhang-ai/Speech2Video
ACCV 2020 "Speech2Video Synthesis with 3D Skeleton Regularization and Expressive Body Poses"