mit-han-lab/lpd
[ICLR 2026 Oral] Locality-aware Parallel Decoding for Efficient Autoregressive Image Generation
Introduces **Flexible Parallelized Autoregressive Modeling**, which uses learnable position query tokens to enable arbitrary generation orders and mutual visibility among concurrently decoded tokens, paired with **Locality-aware Generation Ordering**, which minimizes dependencies within each parallel group while maximizing contextual support from already-generated tokens. On ImageNet class-conditional generation, it reduces generation steps from 256 to 20 (256×256) and from 1024 to 48 (512×512), achieving at least 3.4× lower latency than prior parallelized autoregressive approaches. Built on discrete tokenization (LlamaGen VQ-GAN), with pre-trained models from 337M to 1.4B parameters available on Hugging Face.
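The ordering idea above can be illustrated with a toy greedy scheduler: tokens decoded in the same step should sit far apart on the grid (few intra-group dependencies), while each should sit close to already-generated tokens (strong context). This is a hypothetical sketch of that principle only, not the paper's actual algorithm; the `group_sizes` schedule below is made up and merely sums to the token count.

```python
import math

def dist(a, b):
    """Euclidean distance between two (row, col) grid positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def locality_aware_order(grid=16, group_sizes=None):
    """Greedy toy scheduler: build parallel groups whose tokens are
    mutually far apart but near previously generated context.
    Illustrative only; not the paper's exact ordering rule."""
    if group_sizes is None:
        # hypothetical 20-step schedule for a 16x16 (256-token) grid
        group_sizes = [1] * 4 + [4] * 4 + [12] * 4 + [24] * 4 + [23] * 4
    assert sum(group_sizes) == grid * grid
    remaining = {(r, c) for r in range(grid) for c in range(grid)}
    generated = []   # all tokens decoded so far, in order
    schedule = []    # list of groups, one group per decoding step
    for size in group_sizes:
        group = []
        for _ in range(size):
            def score(p, group=group):
                # reward distance to tokens already chosen for this step...
                intra = min((dist(p, q) for q in group), default=grid)
                # ...and penalize distance to the generated context
                ctx = min((dist(p, q) for q in generated), default=0)
                return intra - ctx
            best = max(sorted(remaining), key=score)
            group.append(best)
            remaining.remove(best)
        generated.extend(group)
        schedule.append(group)
    return schedule
```

With the default schedule this covers all 256 positions in 20 steps, mirroring the step count quoted above for 256×256 generation.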
Stars: 91
Forks: 7
Language: Python
License: MIT
Category:
Last pushed: Mar 12, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/mit-han-lab/lpd"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
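For programmatic use, the endpoint pattern can be wrapped in a small helper. This is a sketch inferred from the single curl example above; only the `/quality/diffusion/mit-han-lab/lpd` path is confirmed, and the `category/owner/repo` structure is an assumption.

```python
from urllib.parse import quote

# base path taken from the curl example above
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    """Build the API URL for a repo's quality data.
    The category/owner/repo path structure is inferred,
    not documented."""
    return f"{BASE}/{quote(category)}/{quote(owner)}/{quote(repo)}"
```

For example, `quality_url("diffusion", "mit-han-lab", "lpd")` reproduces the URL in the curl command above.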
Higher-rated alternatives
PaddlePaddle/PaddleMIX
Paddle Multimodal Integration and eXploration, supporting mainstream multi-modal tasks,...
UCSC-VLAA/story-iter
[ICLR 2026] A Training-free Iterative Framework for Long Story Visualization
keivalya/mini-vla
a minimal, beginner-friendly VLA to show how robot policies can fuse images, text, and states to...
adobe-research/custom-diffusion
Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
byliutao/1Prompt1Story
🔥ICLR 2025 (Spotlight) One-Prompt-One-Story: Free-Lunch Consistent Text-to-Image Generation...