lyuchenyang/Macaw-LLM

Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration

Score: 45 / 100 (Emerging)

Combines CLIP for image/video encoding, Whisper for audio encoding, and LLaMA as the language backbone, with a lightweight attention-based alignment layer that bridges multi-modal embeddings into the LLM's token-embedding space. Introduces one-stage instruction fine-tuning and a 119K-example multi-modal instruction dataset (69K image-based, 50K video-based) generated from MS COCO, Charades, and AVSD captions using GPT-3.5-Turbo. Supports all four modalities in a single unified inference pass, adding minimal parameters over the base LLM.
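The alignment layer is the architectural core here. Below is a minimal PyTorch sketch of one plausible reading of it: modality features (from CLIP or Whisper) act as attention queries against the LLM's token-embedding table as keys and values, so each output vector is a soft mixture of LLM token embeddings. The class name ModalityAligner and all dimensions are illustrative assumptions, not the repo's actual API.

import torch
import torch.nn as nn

class ModalityAligner(nn.Module):
    """Hypothetical attention-based aligner: maps encoder features
    (CLIP image/video patches or Whisper audio frames) into the
    LLM's token-embedding space. Names and dims are illustrative."""

    def __init__(self, enc_dim: int, llm_dim: int, num_heads: int = 8):
        super().__init__()
        # Project encoder output to the LLM's hidden width.
        self.proj = nn.Linear(enc_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads, batch_first=True)

    def forward(self, enc_feats: torch.Tensor, token_table: torch.Tensor) -> torch.Tensor:
        # enc_feats: (batch, seq, enc_dim); token_table: (vocab, llm_dim)
        q = self.proj(enc_feats)
        kv = token_table.unsqueeze(0).expand(q.size(0), -1, -1)
        aligned, _ = self.attn(q, kv, kv)  # soft lookup into token space
        return aligned  # (batch, seq, llm_dim): prepend to text embeddings

# Toy usage (real LLaMA-7B dims would be llm_dim=4096, vocab=32000):
aligner = ModalityAligner(enc_dim=64, llm_dim=128)
feats = torch.randn(2, 16, 64)   # stand-in for CLIP/Whisper features
table = torch.randn(1000, 128)   # stand-in for the LLM embedding table
prefix = aligner(feats, table)   # -> (2, 16, 128)

Because the output lives in the same space as text-token embeddings, the aligned features can simply be concatenated ahead of the prompt embeddings, which is what keeps the added parameter count small relative to the base LLM.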

1,593 stars. No commits in the last 6 months.

Status flags: Stale (6 months) · No package published · No known dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 19 / 25
(the four subscores sum to the overall 45 / 100)

Stars: 1,593
Forks: 132
Language: Python
License: Apache-2.0
Last pushed: Jan 01, 2025
Commits (last 30 days): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/lyuchenyang/Macaw-LLM"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
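If you'd rather consume the endpoint from Python, here is a minimal sketch using the requests library. It assumes only that the endpoint returns JSON; the response schema isn't documented here, so inspect the payload rather than hard-coding field names.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/lyuchenyang/Macaw-LLM")

resp = requests.get(URL, timeout=10)
resp.raise_for_status()   # surface HTTP errors (e.g. rate limiting)
data = resp.json()        # assumed: the endpoint returns JSON
print(data)               # inspect the payload to learn its fields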