TAU-VAILab/isbertblind

This repository is for the paper "Is BERT Blind? Exploring the Effect of Vision-and-Language Pretraining on Visual Language Understanding" (CVPR 2023)

Score: 14 / 100 (Experimental)

This tool helps researchers and developers evaluate how well different AI models understand concepts like colors or shapes when presented with text. You input a sentence with a missing word (like a color or shape) and a list of possible options, and it tells you which option the model thinks is the best fit. It's designed for AI researchers and practitioners working on vision-and-language models.
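The selection step described above (scoring each candidate word for the masked slot and returning the best fit) can be sketched in Python. This is a generic illustration of ranking candidates by softmax probability over their logits, not the repository's actual code:

```python
import math

def rank_candidates(logits):
    """Rank candidate fill-words for a masked slot.

    `logits` maps each candidate (e.g. color or shape words) to the
    model's raw score for that token in the masked position. Returns
    (word, probability) pairs, best fit first.
    """
    # Numerically stable softmax over the candidate set only.
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exps.values())
    probs = {w: e / z for w, e in exps.items()}
    return sorted(probs.items(), key=lambda kv: -kv[1])
```

For example, `rank_candidates({"blue": 5.0, "red": 1.0})` would rank "blue" first; in practice the logits would come from a masked language model's output at the masked position.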

No commits in the last 6 months.

Use this if you need to systematically test and compare the "visual understanding" capabilities of different large language models through masked language modeling or Stroop probing.

Not ideal if you're looking for a general-purpose natural language processing library for everyday tasks, as its focus is on specific model evaluation techniques.

Tags: AI model evaluation, natural language understanding, vision-language models, computational linguistics, machine learning research

No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 21
Forks:
Language: Python
License: none
Last pushed: Nov 02, 2023
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/TAU-VAILab/isbertblind"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
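The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the endpoint returns JSON; the helper names are hypothetical, and the path format is copied from the curl example above:

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category, owner, repo):
    # Path format taken from the curl example: /quality/<category>/<owner>/<repo>
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category, owner, repo):
    # No API key needed for up to 100 requests/day; a free key raises
    # the limit to 1,000/day. Assumes a JSON response body.
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

Usage: `fetch_quality("ml-frameworks", "TAU-VAILab", "isbertblind")` fetches the same data as the curl command.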