meiyor/DeepGaze-Text-Embedding-Map

DeepGaze + Text-Embedding-Map project developed at Cardiff University, Christoph Teufel Lab

Score: 12 / 100 (Experimental)

This project helps researchers and scientists better predict where people will look in an image by incorporating semantic information about the objects and scenes. It takes in images and human gaze fixation data, along with text descriptions of objects and scenes, to produce more accurate predictions of visual attention patterns. This tool is for cognitive scientists, vision researchers, and anyone studying human visual perception.

No commits in the last 6 months.

Use this if you need to create more robust and semantically-aware models for predicting human eye movements on images.

Not ideal if you are looking for a pre-trained, plug-and-play solution for general image saliency prediction without needing to train custom models or integrate detailed semantic information.
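The repository's exact pipeline is not documented on this page, but the core idea it describes is combining a bottom-up saliency map (DeepGaze-style) with a semantic relevance map derived from text embeddings. A minimal sketch of that combination, assuming both maps are same-shape arrays and using a simple weighted blend (the function name, blend weight, and normalization are illustrative assumptions, not the project's actual method):

```python
import numpy as np

def combine_maps(saliency, semantic, alpha=0.5):
    """Blend a saliency map with a semantic relevance map.

    saliency, semantic: HxW non-negative arrays.
    alpha: weight given to the semantic (text-embedding) term.
    Returns a prediction normalized to sum to 1, so it can be
    read as a fixation probability map.
    """
    s = saliency / saliency.sum()   # normalize each map to a distribution
    t = semantic / semantic.sum()
    combined = (1 - alpha) * s + alpha * t
    return combined / combined.sum()

# Toy inputs standing in for real model outputs:
rng = np.random.default_rng(0)
saliency = rng.random((4, 4))
semantic = rng.random((4, 4))
pred = combine_maps(saliency, semantic)
```

In practice the semantic map would come from matching text embeddings of object/scene descriptions against image regions; the blend above only illustrates where that signal enters the prediction.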

cognitive-science vision-research human-gaze-prediction saliency-mapping visual-attention
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 7
Forks: (not listed)
Language: Python
License: None
Last pushed: May 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/meiyor/DeepGaze-Text-Embedding-Map"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
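If you would rather query the endpoint from Python than the shell, a minimal standard-library sketch follows. The URL pattern is taken from the curl example above; the response schema is not documented on this page, so the fetch helper is an assumption and only the URL construction is shown being exercised:

```python
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def api_url(owner, repo):
    """Build the quality-API URL for a GitHub repo (pattern from the curl example)."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo):
    """Fetch the raw JSON response body (schema undocumented here)."""
    with urllib.request.urlopen(api_url(owner, repo)) as resp:
        return resp.read().decode("utf-8")

url = api_url("meiyor", "DeepGaze-Text-Embedding-Map")
```

With no API key this stays within the 100 requests/day anonymous limit; a free key raises it to 1,000/day.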