lmms-eval and MASEval

lmms-eval is a general-purpose evaluation framework spanning text, image, video, and audio tasks, while MASEval specializes in evaluating multi-agent LLM systems; the two are complementary tools for different evaluation scenarios rather than direct competitors.

Head-to-head metrics (the overall score, out of 100, is the sum of four subscores, each out of 25):

| Metric | lmms-eval | MASEval |
|---|---|---|
| Overall score | 90 (Verified) | 59 (Established) |
| Maintenance | 23/25 | 13/25 |
| Adoption | 20/25 | 12/25 |
| Maturity | 25/25 | 18/25 |
| Community | 22/25 | 16/25 |
| Stars | 3,883 | 18 |
| Forks | 539 | 7 |
| Downloads | 9,061 | 222 |
| Commits (30d) | 30 | 0 |
| Language | Python | Python |
| License | not listed | MIT |
| Risk flags | none | none |

About lmms-eval

EvolvingLMMs-Lab/lmms-eval

One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
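To give a sense of how the toolkit is typically driven, here is a minimal sketch of invoking its CLI from Python. It assumes lmms-eval keeps the flag conventions of the lm-eval-harness project it builds on, and the model name (`llava`) and task name (`mme`) are example identifiers to verify against the repo's supported model and task lists.

```python
import subprocess
import sys

# Minimal sketch of driving the lmms-eval CLI from Python. Flag names follow
# the lm-eval-harness conventions lmms-eval is derived from; the model and
# task identifiers below are assumptions, not guaranteed to match your setup.
subprocess.run(
    [
        sys.executable, "-m", "lmms_eval",
        "--model", "llava",        # example model backend (assumption)
        "--tasks", "mme",          # example benchmark task (assumption)
        "--batch_size", "1",
        "--output_path", "./logs/",
    ],
    check=True,  # raise if the evaluation run exits with an error
)
```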

About MASEval

parameterlab/MASEval

Multi-Agent LLM Evaluation
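MASEval's actual API is not shown in this comparison, so the sketch below is purely hypothetical: `AgentTurn` and `coordination_score` are illustrative names, not MASEval functions. It only demonstrates the kind of transcript-level metric a multi-agent LLM evaluator might compute.

```python
from dataclasses import dataclass

@dataclass
class AgentTurn:
    agent: str    # which agent produced the message (hypothetical type)
    message: str  # the message content

def coordination_score(transcript: list[AgentTurn]) -> float:
    """Toy metric: fraction of adjacent turns where control passes between
    agents, a crude proxy for interaction rather than monologue."""
    if len(transcript) < 2:
        return 0.0
    handoffs = sum(
        1 for prev, cur in zip(transcript, transcript[1:]) if prev.agent != cur.agent
    )
    return handoffs / (len(transcript) - 1)

transcript = [
    AgentTurn("planner", "Break the task into two subtasks."),
    AgentTurn("coder", "Implementing subtask 1."),
    AgentTurn("coder", "Implementing subtask 2."),
    AgentTurn("reviewer", "Subtask 1 looks correct; revise subtask 2."),
]
print(coordination_score(transcript))  # 2 handoffs over 3 transitions = 0.666...
```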

Scores updated daily from GitHub, PyPI, and npm data.