BICLab/MetaLA
Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral)
Unified framework for linear attention mechanisms that addresses three design constraints—dynamic memory, static approximation ability, and parameter efficiency—through a meta-learning approach. Implemented as a drop-in GPT-NeoX module using Flash Linear Attention and causal convolutions for efficient inference, with HuggingFace-compatible checkpoints (380M–3B parameters) trained on 300B tokens and supporting bf16/fp16 precision.
No commits in the last 6 months.
Stars: 35
Forks: 2
Language: Python
License: —
Category:
Last pushed: Jan 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/BICLab/MetaLA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
fla-org/flash-linear-attention
🚀 Efficient implementations of state-of-the-art linear attention models
thu-ml/SageAttention
[ICLR2025, ICML2025, NeurIPS2025 Spotlight] Quantized Attention achieves speedup of 2-5x...
thu-ml/SpargeAttn
[ICML2025] SpargeAttention: A training-free sparse attention that accelerates any model inference.
fla-org/flame
🔥 A minimal training framework for scaling FLA models
foundation-model-stack/fms-fsdp
🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for...