Tencent-Hunyuan/HunyuanVideo-I2V
HunyuanVideo-I2V: A Customizable Image-to-Video Model based on HunyuanVideo
Extends the base HunyuanVideo framework with token-replacement conditioning to keep the generated video visually consistent with the input first frame while remaining temporally coherent. Supports LoRA fine-tuning for custom motion effects and multi-GPU parallel inference via xDiT, which splits the token sequence across devices for sequence-parallel generation. Integrates with PyTorch and the HuggingFace model hub, with community ports for ComfyUI and quantized inference variants.
1,799 stars. No commits in the last 6 months.
Stars: 1,799
Forks: 189
Language: Python
License: —
Category: —
Last pushed: May 20, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/Tencent-Hunyuan/HunyuanVideo-I2V"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
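The curl command above can also be wrapped in a few lines of Python. A minimal sketch, assuming only the endpoint shown on this page; the JSON field names in the parser ("stars", "forks", "last_pushed") are guesses about the response shape, not documented API fields:

```python
import json
from urllib.request import urlopen

# Base endpoint taken from the curl example above.
API = "https://pt-edge.onrender.com/api/v1/quality/diffusion"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repo quality endpoint URL."""
    return f"{API}/{owner}/{repo}"

def parse_quality(payload: str) -> dict:
    """Extract a few metrics from a JSON response body.

    The field names here are assumptions; inspect a real response
    before relying on them.
    """
    data = json.loads(payload)
    return {k: data.get(k) for k in ("stars", "forks", "last_pushed")}

if __name__ == "__main__":
    url = quality_url("Tencent-Hunyuan", "HunyuanVideo-I2V")
    print(url)
    # Uncomment to hit the live API (100 requests/day without a key):
    # print(parse_quality(urlopen(url).read().decode()))
```

The live request is left commented out so the sketch runs without network access or spending rate-limited requests.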
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
ModelTC/LightX2V
Light Image Video Generation Inference Framework
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators