kyegomez/ScreenAI
Implementation of the ScreenAI model from the paper "ScreenAI: A Vision-Language Model for UI and Infographics Understanding".
Combines Vision Transformer patch encoding with multi-modal and LLM decoder layers to process paired image and text inputs through cross-attention and self-attention mechanisms. Configurable architecture supports variable patch sizes, embedding dimensions, and depth across distinct ViT, multi-modal encoder, and LLM decoder components. Installable via pip with straightforward PyTorch tensor-based inference.
380 stars and 45 monthly downloads. Available on PyPI.
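The description above mentions ViT patch encoding feeding cross-attention between text and image tokens. A minimal NumPy sketch of those two pieces follows; it is illustrative only, not the repository's actual API (function names, shapes, and the single-head attention are assumptions):

```python
import numpy as np

def patchify(image, patch_size):
    """Split an (H, W, C) image into flattened non-overlapping patches,
    as in ViT patch encoding. Returns (num_patches, patch_size**2 * C)."""
    H, W, C = image.shape
    p = patch_size
    patches = image.reshape(H // p, p, W // p, p, C)
    patches = patches.transpose(0, 2, 1, 3, 4)  # regroup into a patch grid
    return patches.reshape(-1, p * p * C)

def cross_attention(text_tokens, image_tokens):
    """Single-head scaled dot-product cross-attention: text queries attend
    over image-patch keys/values (both assumed projected to the same dim)."""
    d = text_tokens.shape[-1]
    scores = text_tokens @ image_tokens.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over patches
    return weights @ image_tokens  # (num_text_tokens, d)

rng = np.random.default_rng(0)
img = rng.standard_normal((224, 224, 3))   # hypothetical input resolution
patches = patchify(img, 16)                # (196, 768)
text = rng.standard_normal((8, 768))       # 8 hypothetical text tokens
out = cross_attention(text, patches)       # (8, 768)
```

In the real model the flattened patches would pass through learned linear projections and the ViT encoder before cross-attention; the sketch only shows the tensor-shape flow.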
Stars: 380
Forks: 36
Language: Python
License: MIT
Category:
Last pushed: Feb 06, 2026
Monthly downloads: 45
Commits (30d): 0
Dependencies: 5
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kyegomez/ScreenAI"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related frameworks
microsoft/art
Exploring the connections between artworks with deep "Visual Analogies"
codetorex/spritex
A simple tool for extracting sprites from full frames. Useful for AI projects.
Zurdo1007/visual-intelligence
Visual Intelligence is a desktop app that extracts text from images and PDFs in Turkish and...
bluet/everypixel-js
JavaScript support for EveryPixel API
IntegerMan/AutomatingMyDog
An experiment in Azure Cognitive Services