hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
Supports full model fine-tuning and LoRA adaptation for video diffusion transformers, as well as Distribution Matching Distillation and sparse attention techniques that achieve >50x denoising speedup. Provides optimized inference through sequence parallelism and multiple attention backends (including Video Sparse Attention), with a Python API and CLI that support H100/A100/4090 GPUs on Linux, Windows, and macOS. Integrates with the Hugging Face model hub and supports both autoregressive and bidirectional video generation architectures.
3,232 stars and 1,618 monthly downloads. Used by 1 other package. Actively maintained with 47 commits in the last 30 days. Available on PyPI.
Stars: 3,232
Forks: 286
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 17, 2026
Monthly downloads: 1,618
Commits (30d): 47
Dependencies: 44
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/hao-ai-lab/FastVideo"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related models
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
ModelTC/LightX2V
Light Image/Video Generation Inference Framework
Lightricks/LTX-Video
Official repository for LTX-Video
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators