MiniMax-AI/MiniMax-01

The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention

Score: 48 / 100 (Emerging)

A hybrid of Lightning Attention (linear attention) and softmax attention layers enables efficient long-context processing of up to 4M tokens at inference, while Mixture-of-Experts routing activates only 45.9B of the 456B total parameters per token for computational efficiency. MiniMax-VL-01 extends this with dynamic multi-resolution image encoding (336×336 to 2016×2016) through a Vision Transformer encoder and a projector into the language model, supporting complex multimodal reasoning tasks. Models are available via Hugging Face and the MiniMax API platform, and integrate with MCP for Claude desktop environments.
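For readers who want to try the text model, the sketch below shows one way to load it through the Hugging Face transformers library. The repo id "MiniMaxAI/MiniMax-Text-01" and the use of trust_remote_code are assumptions based on the availability note above, not instructions from this page; consult the model card for the exact loading steps and hardware requirements.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-Text-01"  # assumed Hugging Face repo id; verify on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the hybrid-attention / MoE modeling code ships with the checkpoint
    device_map="auto",       # requires accelerate; the full 456B model needs multi-GPU hardware
)

prompt = "Explain why linear attention scales better than softmax attention for long contexts."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))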

3,363 stars. No commits in the last 6 months.

Stale 6m · No Package · No Dependents
Maintenance 2 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25

Stars: 3,363
Forks: 319
Language: Python
License: MIT
Last pushed: Jul 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MiniMax-AI/MiniMax-01"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
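
If you prefer to call the endpoint from code rather than curl, a minimal Python sketch is shown below. The response schema is not documented on this page, so the payload is printed as-is; the API-key header name in the commented line is a placeholder, not a documented parameter.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/MiniMax-AI/MiniMax-01"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the returned fields; their names are not documented here

# With a free key (1,000 requests/day), pass it in a header; the header name below
# is an assumption, not confirmed by this page.
# resp = requests.get(url, headers={"Authorization": "Bearer <YOUR_KEY>"}, timeout=10)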