claws-lab/MMSoc

We introduce MM-Soc, a comprehensive benchmark designed to evaluate MLLMs' understanding of multimodal social media content.

Score: 12 / 100 (Experimental)

This project provides a benchmark for evaluating how well AI models understand social media content that combines images and text. It takes in collections of memes, YouTube videos, and news posts, and assesses a model's ability to perform tasks such as detecting humor, identifying hate speech, categorizing video topics, and spotting misinformation. It is aimed at researchers and developers building multimodal models for social media analysis.

No commits in the last 6 months.

Use this if you are developing or evaluating multimodal AI models and need a standardized way to test their performance on diverse social media understanding tasks.

Not ideal if you are looking for an off-the-shelf tool to directly analyze your own social media data without developing AI models.

Tags: AI model benchmarking, social media analysis, hate speech detection, misinformation detection, multimodal AI
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 8
Forks:
Language: Python
License: None
Last pushed: Aug 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/claws-lab/MMSoc"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
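For programmatic use, the curl call above can be reproduced in Python. This is a minimal sketch: the endpoint path layout is inferred from the example URL, and the `Authorization: Bearer` header name for the optional API key is an assumption, not documented behavior.

```python
from urllib.parse import quote
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    # Build the endpoint URL; the registry/repo path layout is
    # inferred from the curl example above.
    return f"{BASE}/{quote(registry)}/{quote(repo, safe='/')}"

url = quality_url("transformers", "claws-lab/MMSoc")
print(url)

# Fetching requires network access. The header name below is an
# assumption (hypothetical) — check the API docs for the real scheme:
# req = urllib.request.Request(url, headers={"Authorization": "Bearer <KEY>"})
# with urllib.request.urlopen(req) as resp:
#     body = resp.read()
```

URL components are percent-encoded with `quote` (keeping the `/` in the repo slug) so arbitrary registry or repo names cannot break the path.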