OpenGVLab/InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance.
Based on the README, here's a technical summary: InternVL employs a modular architecture that pairs vision encoders with language models, enhanced by techniques such as Variable Visual Position Encoding and Native Multimodal Pre-Training to handle diverse visual inputs at scale. The family spans 1B to 241B parameters, with specialized optimization methods including Mixed Preference Optimization (MPO) for preference alignment and Multimodal Test-Time Scaling for improved reasoning. It integrates with Hugging Face Transformers, supports both GitHub and HF model formats, and provides open-source training pipelines (CascadeRL, data construction) alongside curated datasets such as MMPR for reproducibility.
9,879 stars. No commits in the last 6 months.
Stars: 9,879
Forks: 764
Language: Python
License: MIT
Category:
Last pushed: Sep 22, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OpenGVLab/InternVL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
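The curl command above can also be issued from Python. A minimal sketch using only the standard library; note that the response schema is not documented on this page, so no field names are assumed:

```python
# Sketch: fetch repository quality data from the pt-edge API shown above.
# Only the endpoint URL comes from this page; everything else is generic HTTP.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository endpoint URL (owner/repo path segments)."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For the repository on this page you would call fetch_quality("OpenGVLab", "InternVL"); with a free key the documented limit rises to 1,000 requests/day.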
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision-language model (VLM) from scratch in just 1 hour!
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model