DAGroup-PKU/MHLA

MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head (ICLR 2026)

Quality score: 43/100 (Emerging)

Applies a token-level multi-head mechanism to linear attention, recovering performance competitive with quadratic-complexity softmax attention across diverse modalities (image classification, diffusion models (DiT), language modeling, and video generation) while keeping linear cost, achieving a 2.2× speedup over FlashAttention on long sequences. Implemented as a drop-in replacement operator compatible with timm, DiT, and Sana frameworks, with pretrained weights available on HuggingFace.
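For intuition, here is a minimal PyTorch sketch of a generic multi-head linear-attention operator of the kind MHLA builds on: it uses an elu(x)+1 feature map and aggregates K^T V first, so the cost is linear rather than quadratic in sequence length. All names are illustrative assumptions; this shows the general linear-attention family, not the authors' exact token-level multi-head operator.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearMultiHeadAttention(nn.Module):
    # Illustrative multi-head linear attention; NOT the MHLA paper's exact operator.
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, H, N, d)
        # Positive feature map (elu + 1) in place of softmax, as in standard linear attention.
        q, k = F.elu(q) + 1.0, F.elu(k) + 1.0
        # Aggregate K^T V first: O(N * d^2) rather than the O(N^2 * d) of softmax attention.
        kv = torch.einsum("bhnd,bhne->bhde", k, v)                    # (B, H, d, d)
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        out = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)           # (B, H, N, d)
        return self.proj(out.transpose(1, 2).reshape(B, N, C))

Used as a drop-in operator, a module like this would replace the softmax attention block inside a timm ViT or DiT layer, e.g. y = LinearMultiHeadAttention(384, 6)(torch.randn(2, 196, 384)).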


No package. No dependents.
Maintenance: 10/25
Adoption: 10/25
Maturity: 13/25
Community: 10/25


Stars: 133
Forks: 8
Language: Python
License: MIT
Last pushed: Feb 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DAGroup-PKU/MHLA"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
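If you prefer Python over curl, a minimal equivalent using requests follows. The response schema is not documented here, so the snippet only prints the raw JSON rather than accessing assumed field names.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/DAGroup-PKU/MHLA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()           # surface HTTP errors (e.g., rate limiting)
print(resp.json())                # raw score payload; field names are an assumption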