mit-han-lab/radial-attention

[NeurIPS 2025] Radial Attention: O(n log n) Sparse Attention with Energy Decay for Long Video Generation

Quality score: 45 / 100 (Emerging)

Implements physics-inspired sparse attention masks with exponentially decaying compute density across temporal bands, integrating with video diffusion models (Wan2.1, HunyuanVideo, Mochi-1) and optimized backends including SageAttention and FlashInfer. Achieves O(n log n) complexity through static spatiotemporal masking that scales pre-trained models to 4× longer sequences via lightweight LoRA tuning, with multi-GPU support via xDiT's Ulysses sequence parallelism.
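For intuition, here is a minimal sketch of the masking idea, not the repository's actual implementation: tokens are bucketed into temporal bands by frame distance, and the fraction of attention kept in each band halves as the distance doubles, which yields O(n log n) retained entries. The frame_size grouping, the exact halving rule, and the deterministic column striding below are illustrative assumptions.

# Minimal sketch of a static mask with exponentially decaying density.
# Assumptions (not the repo's API): tokens come in frames of `frame_size`
# tokens, and the kept fraction halves each time frame distance doubles.
import torch

def radial_mask_sketch(num_frames: int, frame_size: int) -> torch.Tensor:
    """Boolean (n, n) mask; True means the query-key pair is attended."""
    n = num_frames * frame_size
    frame_idx = torch.arange(n) // frame_size                # frame of each token
    dist = (frame_idx[:, None] - frame_idx[None, :]).abs()   # temporal distance
    # Band b covers frame distances [2**(b-1), 2**b); band 0 is the same frame.
    band = torch.floor(torch.log2(dist.clamp(min=1).float())).long() + 1
    band[dist == 0] = 0
    keep_prob = 0.5 ** band.float()                          # density halves per band
    stride = (1.0 / keep_prob).long()                        # keep every stride-th key
    cols = torch.arange(n)[None, :]
    return (cols % stride) == 0

mask = radial_mask_sketch(num_frames=8, frame_size=16)
print(mask.shape, mask.float().mean().item())  # density shrinks as length grows

Under these assumptions each query row keeps roughly frame_size entries per band, and there are O(log num_frames) bands, so the retained attention totals O(n log n), matching the headline complexity.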


No package published · No dependents

Maintenance: 6 / 25
Adoption: 10 / 25
Maturity: 15 / 25
Community: 14 / 25


Stars: 587
Forks: 33
Language: Python
License: Apache-2.0
Last pushed: Nov 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/diffusion/mit-han-lab/radial-attention"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
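The same lookup in Python, as a minimal sketch assuming only the endpoint from the curl example above; the response schema is not documented here, so the code prints the raw JSON rather than assuming field names.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/diffusion/"
       "mit-han-lab/radial-attention")
resp = requests.get(url, timeout=10)  # unauthenticated: 100 requests/day
resp.raise_for_status()
print(resp.json())                    # inspect the payload before relying on fields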