NVlabs/MambaVision

[CVPR 2025] Official PyTorch Implementation of MambaVision: A Hybrid Mamba-Transformer Vision Backbone

Quality score: 69/100 (Established)

Combines State Space Models (SSMs) with self-attention in a hierarchical architecture, introducing a redesigned mixer block with a symmetric, SSM-free branch to improve global context modeling. Supports arbitrary input resolutions and produces multi-scale hierarchical features across four stages for downstream tasks such as detection and segmentation. Integrates with the Hugging Face and timm ecosystems, and is available as a pip package with pretrained weights on ImageNet-1K and ImageNet-21K.
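To make the "multi-scale hierarchical features across four stages" concrete, here is a minimal pure-Python sketch (not the actual model code) of how a four-stage hierarchical backbone maps an input resolution to per-stage feature-map sizes, assuming the common 4x patch stem and 2x downsampling between stages (strides 4, 8, 16, 32); the exact MambaVision strides may differ.

```python
def multiscale_shapes(height, width, stem_stride=4, num_stages=4):
    """Per-stage feature-map (H, W) for a hierarchical backbone.

    Hypothetical illustration: assumes a 4x-downsampling stem and
    2x downsampling between stages, the typical layout for
    hierarchical vision backbones used in detection/segmentation.
    """
    shapes = []
    stride = stem_stride
    for _ in range(num_stages):
        shapes.append((height // stride, width // stride))
        stride *= 2
    return shapes

# Arbitrary input resolutions yield proportionally scaled feature maps.
print(multiscale_shapes(224, 224))  # [(56, 56), (28, 28), (14, 14), (7, 7)]
print(multiscale_shapes(512, 640))  # [(128, 160), (64, 80), (32, 40), (16, 20)]
```

Detection and segmentation heads (FPN-style) typically consume all four of these scales, which is why a backbone that works at arbitrary resolutions is useful downstream.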

2,060 stars. Actively maintained with 4 commits in the last 30 days. Available on PyPI.

Maintenance 16 / 25
Adoption 10 / 25
Maturity 25 / 25
Community 18 / 25


Stars          2,060
Forks          129
Language       Python
License
Last pushed    Mar 11, 2026
Commits (30d)  4
Dependencies   6

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/NVlabs/MambaVision"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
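The same endpoint can be queried from Python with only the standard library. A minimal sketch, assuming the URL structure shown in the curl example above (`/quality/<registry>/<owner>/<repo>`); error handling and the optional API key are omitted:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry, owner, repo):
    # Path segments mirror the curl example above.
    return f"{API_BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry, owner, repo):
    # No API key required for up to 100 requests/day.
    with urllib.request.urlopen(quality_url(registry, owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("transformers", "NVlabs", "MambaVision"))
```

Calling `fetch_quality("transformers", "NVlabs", "MambaVision")` would return the parsed JSON payload; the response schema is not documented here, so inspect it before relying on specific fields.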