MiniMax-01 and MiniMax-M1

| Metric | MiniMax-01 | MiniMax-M1 |
|---|---|---|
| Overall score | 48 (Emerging) | 46 (Emerging) |
| Maintenance | 2/25 | 2/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 15/25 |
| Community | 20/25 | 19/25 |
| Stars | 3,363 | 3,115 |
| Forks | 319 | 276 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Python |
| License | MIT | Apache-2.0 |
| Status flags | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |
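
For reference, each overall score is the sum of its four component scores (e.g. 2 + 10 + 16 + 20 = 48 for MiniMax-01). A minimal Python check, assuming unweighted addition as the numbers above suggest rather than a documented scoring formula:

```python
# Sanity check: overall score = Maintenance + Adoption + Maturity + Community.
# Unweighted addition is an assumption inferred from the table above,
# not a documented scoring rule.
components = {
    "MiniMax-01": (2, 10, 16, 20),
    "MiniMax-M1": (2, 10, 15, 19),
}
for name, parts in components.items():
    print(f"{name}: {sum(parts)}")  # MiniMax-01: 48, MiniMax-M1: 46
```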

About MiniMax-01

MiniMax-AI/MiniMax-01

The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on linear attention.

A hybrid of Lightning Attention (a linear-attention mechanism) and softmax attention enables efficient long-context processing of up to 4M tokens at inference, while Mixture-of-Experts routing activates only 45.9B of the 456B total parameters per token, keeping compute per token manageable. MiniMax-VL-01 extends this with dynamic multi-resolution image encoding (336×336 up to 2016×2016) via a Vision Transformer encoder and a projector into the language model, supporting complex multimodal reasoning tasks. The models are available through Hugging Face and the MiniMax API platform, and integrate with MCP for Claude desktop environments.
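
As a rough illustration of the Hugging Face route mentioned above, here is a minimal loading sketch in Python (the repositories' own language). The model id `MiniMaxAI/MiniMax-Text-01`, the need for `trust_remote_code=True`, and the sharding settings are assumptions rather than verified details; in practice a 456B-parameter checkpoint requires a multi-accelerator node.

```python
# Minimal sketch of loading MiniMax-Text-01 through Hugging Face transformers.
# The model id and the trust_remote_code requirement (for the custom
# hybrid-attention/MoE modeling code) are assumptions based on the repo
# description, not verified against the hub.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-Text-01"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,  # the 456B checkpoint is impractical in fp32
    device_map="auto",           # shard layers across available accelerators
)

prompt = "Explain hybrid linear/softmax attention in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```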

About MiniMax-M1

MiniMax-AI/MiniMax-M1

MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.

Scores are updated daily from GitHub, PyPI, and npm data.