kyegomez/ScreenAI

Implementation of the ScreenAI model from the paper "ScreenAI: A Vision-Language Model for UI and Infographics Understanding".

Score: 65 / 100 (Established)

Combines Vision Transformer patch encoding with multi-modal encoder and LLM decoder layers to process paired image and text inputs through cross-attention and self-attention. The architecture is configurable: patch size, embedding dimension, and depth can be set independently for the ViT, multi-modal encoder, and LLM decoder components. Installable via pip, with straightforward PyTorch tensor-based inference.
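The pipeline described above can be sketched in plain PyTorch: a ViT-style patch encoder for the image, a token embedding for the text, and a decoder layer that applies self-attention over text followed by cross-attention to the image patches. All module names, dimensions, and hyperparameters below are illustrative assumptions, not the ScreenAI package's actual API.

```python
# Minimal sketch of the described architecture; names and dims are assumptions.
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Split an image into patches and project each patch to an embedding."""
    def __init__(self, patch_size=16, dim=512, channels=3):
        super().__init__()
        self.proj = nn.Conv2d(channels, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, images):                      # (B, C, H, W)
        patches = self.proj(images)                 # (B, dim, H/p, W/p)
        return patches.flatten(2).transpose(1, 2)   # (B, num_patches, dim)

class CrossAttnDecoderLayer(nn.Module):
    """Self-attention over text tokens, then cross-attention to image patches."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.n1, self.n2, self.n3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, text, image):
        t = self.n1(text)
        text = text + self.self_attn(t, t, t)[0]          # text attends to itself
        text = text + self.cross_attn(self.n2(text), image, image)[0]  # and to patches
        return text + self.ff(self.n3(text))

dim = 512
encoder = PatchEncoder(dim=dim)
decoder = CrossAttnDecoderLayer(dim=dim)
embed = nn.Embedding(1000, dim)                    # toy text vocabulary

images = torch.randn(2, 3, 224, 224)               # paired image batch
tokens = torch.randint(0, 1000, (2, 32))           # paired text batch
out = decoder(embed(tokens), encoder(images))      # (2, 32, 512)
```

Stacking several such decoder layers, with separate depths for the ViT encoder and the decoder, gives the kind of configurable split the description mentions.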

380 stars and 45 monthly downloads. Available on PyPI.

Maintenance 10 / 25
Adoption 14 / 25
Maturity 25 / 25
Community 16 / 25


Stars: 380
Forks: 36
Language: Python
License: MIT
Last pushed: Feb 06, 2026
Monthly downloads: 45
Commits (30d): 0
Dependencies: 5

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kyegomez/ScreenAI"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
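The same endpoint can be queried from Python using only the standard library. The URL scheme is taken from the curl example above; the structure of the JSON response is an assumption, so inspect a live response before relying on specific fields.

```python
# Sketch of calling the quality API from Python instead of curl.
# The URL scheme comes from the page's curl example; the JSON schema
# of the response is an assumption.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(collection: str, repo: str) -> str:
    """Build the API URL, e.g. for ml-frameworks / kyegomez/ScreenAI."""
    return f"{BASE}/{collection}/{repo}"

def fetch_quality(collection: str, repo: str) -> dict:
    """Fetch and decode the JSON quality report (no key: 100 requests/day)."""
    with urllib.request.urlopen(quality_url(collection, repo)) as resp:
        return json.load(resp)

url = quality_url("ml-frameworks", "kyegomez/ScreenAI")
```

With a free API key, the same request pattern applies at the higher 1,000/day limit.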