VectorInstitute/VLDBench
VLDBench: A large-scale benchmark for evaluating Vision-Language Models (VLMs) and Large Language Models (LLMs) on multimodal disinformation detection.
This project evaluates how well AI models detect disinformation in news that combines text and images. Models are given news articles paired with images and must judge whether the content is misleading. Content moderators, policy analysts, and AI-safety researchers can use it to understand and improve AI's ability to counter false information.
Use this if you need a comprehensive, human-verified dataset and a standardized method to benchmark AI models for multimodal disinformation detection.
Not ideal if you are looking for an out-of-the-box disinformation detection tool for direct, real-time content moderation.
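For orientation, here is a minimal sketch of what benchmarking a model on such a dataset could look like. Every name in it (the accuracy helper, the classify callable, the "text"/"image"/"label" fields) is a hypothetical placeholder, not VLDBench's actual API; consult the repository for its real loading and evaluation scripts.

# Minimal evaluation-loop sketch. All names here (accuracy, classify,
# the "text"/"image"/"label" fields) are hypothetical placeholders,
# not this repository's actual API.
from typing import Callable, Iterable

def accuracy(dataset: Iterable[dict], classify: Callable[[str, bytes], bool]) -> float:
    # Fraction of article-image pairs the model labels correctly.
    # Each example is assumed to carry the article text, the paired
    # image bytes, and a binary label (True = disinformation).
    correct = total = 0
    for example in dataset:
        predicted = classify(example["text"], example["image"])
        correct += int(predicted == example["label"])
        total += 1
    return correct / max(total, 1)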
Stars
8
Forks
1
Language
Python
License
—
Category
—
Last pushed
Jan 19, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/VectorInstitute/VLDBench"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
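The same endpoint can be called from Python with the requests library; the response schema is not documented here, so this sketch just prints whatever JSON comes back.

import requests

# Endpoint from the listing above; no API key is needed for up to
# 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/VectorInstitute/VLDBench"

resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())  # schema not documented here; inspect the payload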
Higher-rated alternatives
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle