om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
Implements GRPO (Group Relative Policy Optimization) reinforcement learning on vision-language models such as Qwen2.5-VL and InternVL, demonstrating stronger out-of-domain generalization than supervised fine-tuning (SFT) on tasks like referring expression comprehension and open-vocabulary detection. Supports flexible training configurations, including full fine-tuning, LoRA, multi-node distributed training, and multi-image inputs, with customizable reward functions for different vision tasks. Provides optimized inference implementations via the xllm and vllm-ascend frameworks for deployment on Ascend accelerators.
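The reward functions are pluggable per task. As a rough sketch only (the function names, the box-parsing regex, and the reward shaping below are assumptions for illustration, not the repository's actual API), an IoU-based reward for referring expression comprehension, paired with GRPO's group-relative advantage normalization, could look like:

import re
import statistics

def iou(box_a, box_b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rec_reward(completion, gt_box):
    # Hypothetical REC reward: parse one predicted box from the model's
    # text output and score it by IoU against the ground-truth box.
    match = re.search(r"\[([\d.]+),\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)\]", completion)
    if match is None:
        return 0.0  # unparseable output earns no reward
    pred_box = [float(g) for g in match.groups()]
    return iou(pred_box, gt_box)

def group_relative_advantages(rewards):
    # GRPO scores each sampled completion against its own group:
    # A_i = (r_i - mean(r)) / std(r).
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]

Each prompt is sampled several times, and the normalized advantages weight the policy-gradient update; scoring completions against their own group is what lets GRPO dispense with a learned value model.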
5,864 stars. Still maintained, with 1 commit in the last 30 days.
Stars: 5,864
Forks: 377
Language: Python
License: Apache-2.0
Category: transformers
Last pushed: Mar 12, 2026
Commits (30d): 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/om-ai-lab/VLM-R1"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
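For scripted access, the same endpoint can be queried from Python. A minimal sketch, assuming the third-party requests library is installed; the JSON field names below are guesses at the response schema based on the stats shown on this page, not a documented contract:

import requests

# Endpoint copied verbatim from the listing above; no API key is
# required for up to 100 requests/day.
URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/om-ai-lab/VLM-R1"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Field names here are assumptions mirroring this page's stats.
print(data.get("stars"), data.get("forks"), data.get("last_pushed"))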
Related models
fixie-ai/ultravox
A fast multimodal LLM for real-time voice
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming...
ictnlp/LLaMA-Omni
LLaMA-Omni is a low-latency and high-quality end-to-end speech interaction model built upon...
deepseek-ai/Janus
Janus-Series: Unified Multimodal Understanding and Generation Models
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs