SandAI-org/MAGI-1
MAGI-1: Autoregressive Video Generation at Scale
Implements a Transformer-based VAE architecture with causal temporal modeling to predict video in autoregressive chunks, enabling streaming generation and long-horizon synthesis. Supports diverse conditioning modalities, including text prompts, images, and chunk-wise instructions, for fine-grained control over scene transitions and video attributes. Ships distilled 4.5B variants and integrates with ComfyUI workflows and the Hugging Face ecosystem for inference and deployment.
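The chunk-wise autoregressive scheme described above can be sketched as a simple causal loop. This is a toy illustration only, not MAGI-1's actual implementation: `generate_chunk` is a hypothetical placeholder for the real diffusion-transformer denoiser, and frames are stand-in integer ids. The key property shown is that chunk t conditions only on previously generated chunks, so each chunk can be streamed out as soon as it is done.

```python
# Toy sketch of chunk-wise autoregressive video generation.
# generate_chunk is a hypothetical stand-in for the model's denoiser;
# the real MAGI-1 pipeline operates on latent frames, not integer ids.

def generate_chunk(context, chunk_index, frames_per_chunk=4):
    """Produce the next chunk given the causal context (chunks 0..t-1)."""
    start = chunk_index * frames_per_chunk
    return [start + i for i in range(frames_per_chunk)]

def generate_video(num_chunks, frames_per_chunk=4):
    chunks = []
    for t in range(num_chunks):
        # Causal conditioning: only earlier chunks are visible at step t.
        chunk = generate_chunk(chunks, t, frames_per_chunk)
        chunks.append(chunk)  # each chunk can be streamed immediately
    return [frame for chunk in chunks for frame in chunk]

print(generate_video(3))  # 12 frame ids produced in 3 causal chunks
```

Because generation is causal per chunk, long-horizon synthesis amounts to extending the loop rather than re-denoising the whole clip.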
3,663 stars. No commits in the last 6 months.
Stars
3,663
Forks
235
Language
Python
License
Apache-2.0
Category
diffusion
Last pushed
Jun 17, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/SandAI-org/MAGI-1"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
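For programmatic access, the endpoint above follows a category/owner/repo pattern. A minimal sketch, assuming that pattern generalizes to other repos (the response schema is not documented here, so this only constructs the URL):

```python
# Sketch: build a pt-edge quality-API URL from its apparent
# /quality/<category>/<owner>/<repo> pattern (assumed, not documented).
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("diffusion", "SandAI-org", "MAGI-1"))
# -> https://pt-edge.onrender.com/api/v1/quality/diffusion/SandAI-org/MAGI-1
```

The resulting URL can then be fetched with curl as shown above, or with any HTTP client.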
Higher-rated alternatives
hao-ai-lab/FastVideo
A unified inference and post-training framework for accelerated video generation.
thu-ml/TurboDiffusion
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
ModelTC/LightX2V
Light Image Video Generation Inference Framework
PKU-YuanGroup/Helios
Helios: Real-Time Long Video Generation Model
PKU-YuanGroup/MagicTime
[TPAMI 2025🔥] MagicTime: Time-lapse Video Generation Models as Metamorphic Simulators