MoonshotAI/MoBA

MoBA: Mixture of Block Attention for Long-Context LLMs

Score: 44 / 100 (Emerging)

Divides the full context into blocks and lets each query token attend only to the most relevant KV blocks, selected by a parameter-less top-k gating mechanism, with reported speedups of up to 40x on long sequences. Integrates with HuggingFace Transformers and Flash Attention 2.6.3, offering both a naive (mask-based) and an optimized production implementation that can switch between full and sparse attention modes without architectural changes.
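
For intuition, here is a minimal sketch of the mask-based variant in PyTorch: keys are mean-pooled per block, each query scores the block representatives, and attention is masked to that query's top-k blocks. This is an illustration, not the repository's API: single head, no causal masking, and it omits MoBA's rule that a query's own block is always selected.

    import torch
    import torch.nn.functional as F

    def moba_attention_sketch(q, k, v, block_size=4, top_k=2):
        # q, k, v: (T, d) for a single head; T must be divisible by block_size.
        T, d = q.shape
        n_blocks = T // block_size
        # Parameter-less gating: mean-pool keys per block, score queries against them.
        k_blocks = k.view(n_blocks, block_size, d).mean(dim=1)   # (n_blocks, d)
        gate = q @ k_blocks.T                                    # (T, n_blocks)
        top = gate.topk(top_k, dim=-1).indices                   # (T, top_k)
        # Mask-based variant: each query may attend only inside its top-k blocks.
        allow = torch.zeros(T, n_blocks, dtype=torch.bool)
        allow[torch.arange(T).unsqueeze(1), top] = True
        mask = allow.repeat_interleave(block_size, dim=1)        # (T, T)
        scores = (q @ k.T) / d ** 0.5
        scores = scores.masked_fill(~mask, float("-inf"))
        return F.softmax(scores, dim=-1) @ v

    q, k, v = (torch.randn(16, 8) for _ in range(3))
    print(moba_attention_sketch(q, k, v).shape)  # torch.Size([16, 8])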

2,076 stars. No commits in the last 6 months.

Flags: Stale (6 months), No Package, No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 18 / 25

The four 25-point subscores sum to the overall score of 44 / 100.

Stars: 2,076
Forks: 136
Language: Python
License: MIT
Last pushed: Apr 03, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MoonshotAI/MoBA"

Open to everyone: 100 requests/day with no key needed; a free key raises that to 1,000/day.
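
For scripted access, a sketch of the same request from Python using the requests library; the JSON schema is not shown on this page, so the example just prints the raw payload.

    import requests

    URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/MoonshotAI/MoBA"
    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()  # a 4xx here may mean the daily rate limit was hit
    print(resp.json())       # field names depend on the API's (undocumented) schema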