thu-ml/SageAttention

[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
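As a quick illustration, here is a minimal usage sketch of the quantized kernel as a drop-in for PyTorch's scaled_dot_product_attention. The package name `sageattention`, the `sageattn` call, and its `tensor_layout`/`is_causal` arguments follow the repository README; check the README for the exact signature of your installed version. Tensor shapes and dtypes below are illustrative assumptions.

import torch
from sageattention import sageattn  # assumes `pip install sageattention` and a CUDA GPU

batch, heads, seq_len, head_dim = 2, 16, 4096, 128
q = torch.randn(batch, heads, seq_len, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# Reference: PyTorch's full-precision scaled dot-product attention.
ref = torch.nn.functional.scaled_dot_product_attention(q, k, v, is_causal=False)

# Quantized attention; "HND" denotes the (batch, heads, seq_len, head_dim) layout.
out = sageattn(q, k, v, tensor_layout="HND", is_causal=False)

# The project reports no loss in end-to-end metrics; outputs should stay close.
print(torch.mean(torch.abs(out - ref.to(out.dtype))))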

Quality score: 57 / 100 (Established) · 3,213 stars

No package · No dependents
Maintenance 10 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 21 / 25

Stars: 3,213
Forks: 366
Language: Cuda
License: Apache-2.0
Last pushed: Jan 17, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-ml/SageAttention"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
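For a programmatic client, a minimal sketch in Python is shown below. Only the endpoint URL comes from this page; the shape of the JSON response (field names, nesting) is not documented here, so the example simply prints whatever the API returns.

import requests

# Hypothetical client for the quality endpoint shown above.
# Unkeyed access is rate-limited to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/thu-ml/SageAttention"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())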