open-compass/VLMEvalKit
Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
Provides generation-based evaluation across all supported models, with two assessment modes (exact matching and LLM-based answer extraction), so you don't have to prepare data separately for each fragmented benchmark repository. Supports distributed inference via LMDeploy and vLLM to accelerate large-scale evaluation, and includes specialized handling for models with reasoning/thinking modes and for long-form outputs that exceed standard spreadsheet cell limits in result files. Integrates with the Hugging Face ecosystem (model hosting, datasets, Spaces for leaderboards) and supports video benchmarks via ModelScope for comprehensive vision-language assessment.
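As a quick illustration of the generation-based workflow, here is a minimal Python sketch based on the project's documented quickstart; the model key and image path are placeholders, and the exact registry names depend on the version you have installed:

# Minimal sketch of VLMEvalKit's Python inference API, per the project
# quickstart. The model key and prompt below are illustrative; check
# vlmeval.config.supported_VLM for the names your installed version supports.
from vlmeval.config import supported_VLM

# Instantiate a supported model by its registry key (placeholder name).
model = supported_VLM['qwen_chat']()

# Generation-based evaluation: the model produces free-form text, which the
# toolkit later scores via exact matching or LLM-based answer extraction.
response = model.generate(['assets/apple.jpg', 'What is in this image?'])
print(response)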
3,894 stars. Actively maintained with 21 commits in the last 30 days.
Stars: 3,894
Forks: 650
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 21
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/open-compass/VLMEvalKit"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
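For programmatic use, here is a minimal Python sketch of the same request; the X-API-Key header name is an assumption, so check the API docs for the actual authentication scheme:

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/open-compass/VLMEvalKit"

# Anonymous access allows 100 requests/day. The header name below is a
# guess (hypothetical); consult the API docs for the real key mechanism.
headers = {"X-API-Key": "YOUR_API_KEY"}  # omit this dict for anonymous access

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())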
Related tools
EvolvingLMMs-Lab/lmms-eval
One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
Giskard-AI/giskard-oss
🐢 Open-Source Evaluation & Testing library for LLM Agents
vibrantlabsai/ragas
Supercharge Your LLM Application Evaluations 🚀
EuroEval/EuroEval
The robust European language model benchmark.
evalplus/evalplus
Rigorous evaluation of LLM-synthesized code - NeurIPS 2023 & COLM 2024