flash-linear-attention and Star-Attention

Flash-linear-attention provides production-ready implementations of linear attention mechanisms that reduce attention complexity from quadratic to linear in sequence length. Star-Attention instead optimizes standard quadratic attention for long sequences through efficient, distributed inference techniques. They are **competitors** addressing the same problem (efficient long-context attention) through fundamentally different algorithmic approaches.
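The complexity difference can be made concrete with a minimal numpy sketch (illustrative only, not code from either library). Causal linear attention with a positive feature map `phi` can be computed two mathematically equivalent ways: a quadratic reference that materializes the full N x N score matrix, and a linear-time recurrence that carries a small running state.

```python
import numpy as np

def phi(x):
    # Positive feature map elu(x) + 1, as in "Transformers are RNNs"
    # (Katharopoulos et al., 2020); choice of phi is illustrative.
    return np.where(x > 0, x + 1.0, np.exp(np.minimum(x, 0.0)))

def causal_attention_quadratic(Q, K, V):
    # Reference O(N^2) form: masked similarity matrix, row-normalized.
    A = phi(Q) @ phi(K).T                    # (N, N) similarities
    A = np.tril(A)                           # causal mask
    return (A @ V) / A.sum(axis=1, keepdims=True)

def causal_attention_linear(Q, K, V):
    # O(N) recurrent form: carry a (d, d_v) state and a (d,) normalizer.
    N, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))                   # running sum of phi(k_t) v_t^T
    z = np.zeros(d)                          # running sum of phi(k_t)
    out = np.empty((N, d_v))
    for t in range(N):
        S += np.outer(phi(K[t]), V[t])
        z += phi(K[t])
        out[t] = (phi(Q[t]) @ S) / (phi(Q[t]) @ z)
    return out

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
ref = causal_attention_quadratic(Q, K, V)
rec = causal_attention_linear(Q, K, V)
assert np.allclose(ref, rec)                 # same outputs, different cost
```

The recurrent form is why linear attention scales to long contexts: memory and compute per token are constant in sequence length, at the price of compressing history into a fixed-size state.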

| | flash-linear-attention | Star-Attention |
|---|---|---|
| Overall score | (not shown) | 40 (Emerging) |
| Maintenance | 23/25 | 2/25 |
| Adoption | 21/25 | 10/25 |
| Maturity | 25/25 | 16/25 |
| Community | 20/25 | 12/25 |
| Stars | 4,549 | 392 |
| Forks | 431 | 21 |
| Downloads | 438,484 | (none reported) |
| Commits (30d) | 30 | 0 |
| Language | Python | Python |
| License | MIT | Apache-2.0 |
| Risk flags | None | Stale 6m, No Package, No Dependents |

About flash-linear-attention

fla-org/flash-linear-attention

🚀 Efficient implementations of state-of-the-art linear attention models

Provides PyTorch and Triton kernels for linear attention variants (RetNet, GLA, Mamba, RWKV, DeltaNet, and 20+ emerging architectures), optimized for CPU and GPU across NVIDIA, AMD, and Intel platforms. Includes fused operators, hybrid model support, and variable-length sequence handling to reduce memory overhead during training. Integrates with Hugging Face model hub and the companion `flame` training framework for distributed model development.
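A key trick behind fused chunked kernels like these is to split the sequence into chunks: intra-chunk attention is a small dense matmul (GPU-friendly parallelism), while inter-chunk information flows through a single recurrent state update. The numpy sketch below illustrates that chunkwise decomposition in general terms; it is not fla's API, and `phi` here is an arbitrary positive feature map chosen for illustration.

```python
import numpy as np

def phi(x):
    # Simple positive feature map (illustrative, not fla's choice).
    return np.maximum(x, 0.0) + 1e-6

def chunked_linear_attention(Q, K, V, chunk=4):
    # Process the sequence chunk by chunk, carrying the (d, d_v) state
    # across chunks: intra-chunk work is a dense causal matmul (parallel),
    # inter-chunk work is one state update (recurrent).
    N, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))                 # state from previous chunks
    z = np.zeros(d)                        # normalizer from previous chunks
    out = np.empty((N, d_v))
    for s in range(0, N, chunk):
        q, k, v = phi(Q[s:s+chunk]), phi(K[s:s+chunk]), V[s:s+chunk]
        A = np.tril(q @ k.T)               # causal intra-chunk scores
        num = q @ S + A @ v                # inter-chunk + intra-chunk terms
        den = q @ z + A.sum(axis=1)        # matching normalizer
        out[s:s+chunk] = num / den[:, None]
        S += k.T @ v                       # fold this chunk into the state
        z += k.sum(axis=0)
    return out

# Sanity check against a naive quadratic reference:
rng = np.random.default_rng(1)
Q, K, V = (rng.standard_normal((10, 3)) for _ in range(3))
A_full = np.tril(phi(Q) @ phi(K).T)
ref = (A_full @ V) / A_full.sum(axis=1, keepdims=True)
assert np.allclose(chunked_linear_attention(Q, K, V, chunk=4), ref)
```

The chunk size trades off parallelism against state-update frequency; production kernels tune it to the hardware and fuse the whole loop into one GPU kernel.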

About Star-Attention

NVIDIA/Star-Attention

Efficient LLM Inference over Long Sequences
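Per the Star-Attention paper, long contexts are sharded across hosts; in the query phase, each host attends over its local KV shard and the partial results are merged exactly using log-sum-exp statistics (online softmax), so no host ever needs the full KV cache. The sketch below is an independent illustration of that aggregation step, not NVIDIA's implementation; function names are mine.

```python
import numpy as np

def local_attention(q, K, V):
    # One host's attention over its local KV shard. Also returns the
    # log-sum-exp of its scores so shards can be merged exactly.
    s = K @ q                              # (n_local,) raw scores
    m = s.max()
    w = np.exp(s - m)                      # numerically stable softmax
    return (w @ V) / w.sum(), m + np.log(w.sum())

def merge_shards(partials):
    # Combine per-shard (output, lse) pairs with weights proportional
    # to exp(lse): recovers the global softmax without moving any KV.
    outs, lses = zip(*partials)
    lses = np.array(lses)
    g = np.exp(lses - lses.max())
    g /= g.sum()
    return sum(gi * oi for gi, oi in zip(g, outs))

rng = np.random.default_rng(2)
q = rng.standard_normal(4)
K = rng.standard_normal((12, 4))
V = rng.standard_normal((12, 5))
# Full attention computed in one place:
s = K @ q
w = np.exp(s - s.max())
full = (w @ V) / w.sum()
# Same result from three shards merged via log-sum-exp:
shards = [local_attention(q, K[i:i+4], V[i:i+4]) for i in range(0, 12, 4)]
assert np.allclose(merge_shards(shards), full)
```

Because the merge is exact, the approximation in Star-Attention comes from the blockwise context phase, not from this distributed softmax.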

Scores updated daily from GitHub, PyPI, and npm data.