MiniMax-01 and MiniMax-M1
About MiniMax-01
MiniMax-AI/MiniMax-01
The official repository of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model based on Linear Attention.
A hybrid of Lightning Attention (a linear-attention variant) and softmax attention enables efficient long-context processing of up to 4M tokens at inference, while Mixture-of-Experts routing activates only 45.9B of the 456B total parameters per token for computational efficiency. MiniMax-VL-01 extends the text model with dynamic multi-resolution image encoding (336×336 up to 2016×2016) through a Vision Transformer encoder and an MLP projector, supporting complex multimodal reasoning tasks. The models are available on Hugging Face and through the MiniMax API platform, and integrate with Claude desktop environments via MCP.
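As an illustration of the Hugging Face route, here is a minimal sketch of loading the text model with transformers. The hub id MiniMaxAI/MiniMax-Text-01 and the trust_remote_code flag are assumptions based on the usual pattern for models with custom architectures; in practice the full 456B-parameter checkpoint requires a multi-GPU setup.

```python
# Minimal sketch: loading MiniMax-Text-01 from Hugging Face.
# Assumes the hub id "MiniMaxAI/MiniMax-Text-01" and that the repo ships
# custom modeling code loaded via trust_remote_code=True (an assumption,
# not confirmed by this listing).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-Text-01"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # pulls in the hybrid-attention / MoE modules
    device_map="auto",       # shard weights across available GPUs
    torch_dtype="auto",
)

prompt = "Summarize the idea behind linear attention in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```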
About MiniMax-M1
MiniMax-AI/MiniMax-M1
MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.