eltoto1219/vltk
A toolkit for vision-language processing to support the increasing popularity of multi-modal transformer-based models
No commits in the last 6 months.
Stars: 1
Forks: 1
Language: HTML
License: Apache-2.0
Category:
Last pushed: Oct 30, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eltoto1219/vltk"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
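The endpoint above appears to follow the pattern `/api/v1/quality/<category>/<owner>/<repo>`. A minimal sketch of building the URL for any repo in Python; the path pattern is an assumption inferred from the single example URL, not documented API behavior:

```python
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    # Assemble the per-repo endpoint. The segment order
    # (category/owner/repo) is inferred from the example
    # URL on this page -- an assumption, not a documented spec.
    return f"{BASE}/{category}/{owner}/{repo}"

# Reproduces the example request target for this repo:
print(quality_url("transformers", "eltoto1219", "vltk"))
```

Pass the result to `curl` or any HTTP client; the response format is not described on this page, so inspect the JSON before relying on specific fields.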
Higher-rated alternatives
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
kyegomez/PALM-E
Implementation of "PaLM-E: An Embodied Multimodal Language Model"
lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration