flash-linear-attention and Star-Attention
Flash-linear-attention provides production-ready implementations of linear attention mechanisms, which reduce attention's cost from quadratic to linear in sequence length. Star-Attention instead keeps standard softmax attention and optimizes its inference over long sequences with blockwise, distributed computation. They are **competitors** addressing the same problem (efficient long-context attention) through fundamentally different algorithmic approaches.
About flash-linear-attention
fla-org/flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
Provides PyTorch and Triton kernels for linear attention variants (RetNet, GLA, Mamba, RWKV, DeltaNet, and 20+ emerging architectures), with Triton-based GPU support across NVIDIA, AMD, and Intel platforms. Includes fused operators, hybrid-model support, and variable-length sequence handling to reduce memory overhead during training. Integrates with the Hugging Face model hub and the companion `flame` training framework for distributed model development.
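To make the quadratic-vs-linear distinction concrete, here is a minimal NumPy sketch of the kernelized linear-attention trick: replacing the softmax kernel `exp(q·k)` with a positive feature map `phi` lets the key/value summary be accumulated into a small `(d, d)` state that every query reuses. This is an illustration of the general technique only, not flash-linear-attention's Triton kernels; the function names and the feature map are my own choices.

```python
import numpy as np

def softmax_attention(q, k, v):
    # Standard attention: materializes an (n, n) score matrix,
    # so time and memory grow quadratically with sequence length n.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def linear_attention(q, k, v, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized attention: with phi(q_i)·phi(k_j) in place of exp(q_i·k_j),
    # the summary state S = sum_j phi(k_j) v_j^T has shape (d, d) and is
    # built once, giving O(n * d^2) cost instead of O(n^2 * d).
    q, k = phi(q), phi(k)
    S = k.T @ v                       # (d, d) key/value summary state
    z = k.sum(axis=0)                 # (d,) normalizer accumulator
    return (q @ S) / (q @ z)[:, None]
```

Because `phi` is positive, each output row is still a convex combination of the value rows, just as in softmax attention; causal variants update `S` and `z` step by step instead of summing over all positions at once.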
About Star-Attention
NVIDIA/Star-Attention
Efficient LLM Inference over Long Sequences
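Star-Attention keeps exact softmax attention but shards the long context blockwise across hosts; partial per-block attention outputs can then be merged exactly using each block's log-sum-exp statistic. Below is a minimal, single-process NumPy sketch of that merge idea (an illustration only, not NVIDIA's distributed implementation; function names are my own).

```python
import numpy as np

def block_attention_with_lse(q, k_block, v_block):
    # Local softmax attention over one block of keys/values, plus the
    # log-sum-exp of the scores so partial results can be merged exactly.
    scores = q @ k_block.T / np.sqrt(q.shape[-1])
    m = scores.max(axis=-1, keepdims=True)
    w = np.exp(scores - m)
    s = w.sum(axis=-1, keepdims=True)
    out = (w / s) @ v_block                    # (n_q, d) local output
    lse = (m + np.log(s)).squeeze(-1)          # (n_q,) log of local softmax mass
    return out, lse

def merge_blocks(partials):
    # Reweight each block's output by its share of the global softmax mass:
    # softmax over the per-block log-sum-exp values recovers exact attention.
    outs, lses = zip(*partials)
    outs = np.stack(outs, axis=0)              # (H, n_q, d)
    lses = np.stack(lses, axis=0)              # (H, n_q)
    w = np.exp(lses - lses.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)          # per-query block weights
    return np.einsum('hq,hqd->qd', w, outs)
```

Because only small per-block outputs and scalars cross host boundaries, the merge communicates `O(n_q * d)` data per block instead of the full key/value cache.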