open-mmlab/PIA
[CVPR 2024] PIA, your Personalized Image Animator. Animate your images with a text prompt, combined with DreamBooth, to achieve stunning videos.
Implements plug-and-play motion modules injected into Stable Diffusion's UNet to decouple motion generation from content preservation, enabling fine-grained control over motion magnitude while maintaining image fidelity. Integrates with DreamBooth-LoRA for personalized style adaptation and leverages PyTorch 2.0's scaled dot-product attention for memory-efficient inference on consumer GPUs (16GB VRAM for 1024×1024 images). Supports multi-framework deployment via HuggingFace, Replicate, and OpenXLab with configurable YAML-based inference pipelines.
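The description above mentions PyTorch 2.0's scaled dot-product attention as the source of memory-efficient inference. A minimal sketch of that kernel in isolation (tensor shapes here are illustrative, not PIA's actual UNet dimensions):

```python
# Sketch: memory-efficient attention via PyTorch 2.0's
# torch.nn.functional.scaled_dot_product_attention.
# Shapes are illustrative placeholders, not taken from PIA's code.
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # Dispatches to a fused kernel when available, avoiding
    # materializing the full (seq x seq) attention matrix in memory.
    return F.scaled_dot_product_attention(q, k, v)

q = torch.randn(1, 8, 64, 32)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 8, 64, 32)
v = torch.randn(1, 8, 64, 32)
out = attention(q, k, v)
print(tuple(out.shape))  # (1, 8, 64, 32)
```

The fused path is what makes 1024×1024 inference feasible within 16 GB of VRAM, since naive attention would allocate the full attention matrix per head.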
978 stars. No commits in the last 6 months.
Stars: 978
Forks: 73
Language: Python
License: Apache-2.0
Category:
Last pushed: Aug 05, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/open-mmlab/PIA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
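The endpoint shown in the curl command follows a `/quality/{category}/{owner}/{repo}` pattern. A small helper to build that URL from Python (the response schema and any API-key header name are not documented here, so only URL construction is sketched):

```python
# Sketch: build the quality-API URL shown in the curl example above.
# Only the URL pattern is taken from this page; response fields and
# authentication details are not specified here.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality-API URL for a given repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = build_quality_url("diffusion", "open-mmlab", "PIA")
print(url)
# https://pt-edge.onrender.com/api/v1/quality/diffusion/open-mmlab/PIA
```

The resulting URL can then be fetched with any HTTP client (e.g. `curl` as shown above), within the 100 requests/day anonymous limit.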
Higher-rated alternatives
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...