lyuchenyang/Macaw-LLM
Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
Combines CLIP for image/video encoding, Whisper for audio encoding, and LLaMA as the language backbone, with a lightweight attention-based alignment layer that bridges multi-modal embeddings to LLM token space. Introduces one-stage instruction fine-tuning and a 119K-example multi-modal instruction dataset (69K image-based, 50K video-based) generated from MS COCO, Charades, and AVSD captions using GPT-3.5-Turbo. Supports processing of all four modalities in unified inference, with minimal additional parameters compared to the base LLM.
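The attention-based alignment described above can be sketched as cross-attention in which a small set of learnable query tokens (living in LLM embedding space) attends over the modality encoder's output. This is a minimal NumPy sketch under assumed shapes; the function and variable names (`align_modal_embeddings`, `W_k`, `W_v`) are illustrative, not Macaw-LLM's actual API.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align_modal_embeddings(modal_emb, query_tokens, W_k, W_v):
    """Cross-attention alignment sketch: learnable query tokens attend
    over modality-encoder features (e.g., CLIP patch or Whisper frame
    embeddings) and emit a fixed number of vectors in the LLM's
    embedding space, ready to be prepended to text token embeddings."""
    d = query_tokens.shape[-1]
    keys = modal_emb @ W_k                         # (n_feat, d_llm)
    values = modal_emb @ W_v                       # (n_feat, d_llm)
    scores = query_tokens @ keys.T / np.sqrt(d)    # (n_query, n_feat)
    return softmax(scores, axis=-1) @ values       # (n_query, d_llm)

# Toy shapes (assumed for illustration): 50 encoder features of dim 768
# mapped to 8 LLM-space tokens of dim 4096.
rng = np.random.default_rng(0)
modal = rng.normal(size=(50, 768))
queries = rng.normal(size=(8, 4096))
W_k = rng.normal(size=(768, 4096)) * 0.01
W_v = rng.normal(size=(768, 4096)) * 0.01
aligned = align_modal_embeddings(modal, queries, W_k, W_v)
print(aligned.shape)  # (8, 4096)
```

Because only the query tokens and the two projection matrices are trained, the alignment layer adds few parameters relative to the frozen encoders and the LLM backbone.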
1,593 stars. No commits in the last 6 months.
Stars: 1,593
Forks: 132
Language: Python
License: Apache-2.0
Category:
Last pushed: Jan 01, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lyuchenyang/Macaw-LLM"
Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
kyegomez/PALI3
Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER"
kyegomez/RT-X
Pytorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment:...
chuanyangjin/MMToM-QA
[🏆Outstanding Paper Award at ACL 2024] MMToM-QA: Multimodal Theory of Mind Question Answering
kyegomez/PALM-E
Implementation of "PaLM-E: An Embodied Multimodal Language Model"
Muennighoff/vilio
🥶Vilio: State-of-the-art VL models in PyTorch & PaddlePaddle