yunncheng/MMRL
[CVPR 2025 & IJCV 2026] Official PyTorch code for "MMRL: Multi-Modal Representation Learning for Vision-Language Models" and its extension "MMRL++: Parameter-Efficient and Interaction-Aware Representation Learning for Vision-Language Models".
Stars: 102
Forks: 3
Language: Python
License: MIT
Last pushed: Apr 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/multimodal/yunncheng/MMRL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
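If you would rather call the endpoint from Python than curl, here is a minimal standard-library sketch. The response schema is not documented on this page, so the loop simply prints whatever keys the API returns; nothing about the field names is assumed.

import json
import urllib.request

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/multimodal/yunncheng/MMRL"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.loads(resp.read().decode("utf-8"))

# The exact fields are not documented here, so just print what comes back.
for key, value in data.items():
    print(f"{key}: {value}")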
Higher-rated alternatives
starVLA/starVLA
StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing
vortex-data/vortex
An extensible, state-of-the-art framework for columnar compression, and the fastest FOSS...
motis-project/motis
multimodal routing, geocoding, and map tiles
zai-org/GLM-V
GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning
neka-nat/cad3dify
2D to 3D CAD Conversion Using VLM