kohjingyu/fromage
🧀 Code and models for the ICML 2023 paper "Grounding Language Models to Images for Multimodal Inputs and Outputs".
486 stars. No commits in the last 6 months.
Stars: 486
Forks: 38
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Oct 30, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/kohjingyu/fromage"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
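For programmatic use, the curl example above can be wrapped in a small helper. This is a minimal sketch: the path layout (`/api/v1/quality/<category>/<owner>/<repo>`) is inferred from that one example, and the shape of the JSON response is an assumption.

```python
# Sketch of a client for the quality endpoint shown above.
# The path segments (category/owner/repo) are inferred from the curl
# example; the response is assumed to be a JSON object.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL used in the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the response (assumed JSON)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "kohjingyu", "fromage"))
# https://pt-edge.onrender.com/api/v1/quality/transformers/kohjingyu/fromage
```

Unauthenticated calls count against the 100/day limit; with a key you would typically pass it as a header or query parameter, but the exact mechanism is not documented here.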
Higher-rated alternatives
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
KimMeen/Time-LLM
[ICLR 2024] Official implementation of "🦙 Time-LLM: Time Series Forecasting by Reprogramming..."
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
fixie-ai/ultravox
A fast multimodal LLM for real-time voice
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.