MiniMax-AI/MiniMax-01
The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and a vision-language model built on linear attention
A hybrid of Lightning Attention (linear) and softmax attention enables efficient long-context processing up to 4M tokens at inference, while Mixture-of-Experts routing activates only 45.9B of the 456B parameters per token for computational efficiency. MiniMax-VL-01 extends this with dynamic multi-resolution image encoding (336×336 up to 2016×2016) through a Vision Transformer encoder and projector, supporting complex multimodal reasoning tasks. Models are available via Hugging Face, the MiniMax API platform, and integrate with MCP for Claude desktop environments.
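The per-token sparsity described above (45.9B of 456B parameters active) comes from top-k expert gating, the standard MoE routing pattern. A minimal NumPy sketch of that pattern; all names, shapes, and the choice of k are illustrative, not taken from the MiniMax codebase:

```python
import numpy as np

def topk_moe_route(hidden, gate_weights, k=2):
    """Route one token to its top-k experts by gate score.

    hidden: (d,) token hidden state; gate_weights: (d, n_experts).
    Returns the chosen expert indices and their normalized mixing weights.
    """
    logits = hidden @ gate_weights                # (n_experts,) gate logits
    topk = np.argsort(logits)[-k:][::-1]          # indices of the k highest-scoring experts
    scores = np.exp(logits[topk] - logits[topk].max())
    weights = scores / scores.sum()               # softmax over the selected experts only
    return topk, weights

rng = np.random.default_rng(0)
d, n_experts = 8, 32                              # toy sizes, not the real model's
idx, w = topk_moe_route(rng.normal(size=d), rng.normal(size=(d, n_experts)), k=2)
# Only 2 of 32 experts run for this token; their outputs are mixed by w, which sums to 1.
```

Each token is then processed by only the selected experts, which is why the active parameter count stays a small fraction of the total.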
3,363 stars. No commits in the last 6 months.
Stars
3,363
Forks
319
Language
Python
License
MIT
Category
Last pushed
Jul 07, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MiniMax-AI/MiniMax-01"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
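The curl command above can be wrapped in a few lines of stdlib Python. Only the endpoint URL is taken from this page; the `X-API-Key` header name and the JSON response shape are assumptions, not documented here:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner, repo):
    """Build the quality-report URL for a repo (same endpoint as the curl example)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch the report as JSON; keyless calls fall under the 100-requests/day tier."""
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # header name assumed; check the API docs
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

print(quality_url("MiniMax-AI", "MiniMax-01"))
```

Calling `fetch_quality("MiniMax-AI", "MiniMax-01")` should return the same data shown on this page (stars, forks, last push, 30-day commits).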
Compare
Higher-rated alternatives
NX-AI/xlstm
Official repository of the xLSTM.
DashyDashOrg/pandas-llm
Pandas-LLM
MiniMax-AI/MiniMax-M1
MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
wxhcore/bumblecore
An LLM training framework built from the ground up, featuring a custom BumbleBee architecture...
sinanuozdemir/oreilly-hands-on-gpt-llm
Mastering the Art of Scalable and Efficient AI Model Deployment