haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Combines a CLIP vision encoder with a lightweight projection layer that aligns image features with a large language model, enabling end-to-end instruction tuning on image-text pairs. Supports efficient fine-tuning via LoRA, 4-/8-bit quantization, and, in newer versions, variable-resolution inputs with up to 4x more pixels. Integrates with Hugging Face, llama.cpp, and AutoGen, with pre-trained checkpoints spanning multiple base models (LLaMA, Llama-2, Qwen, Llama-3).
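The projection step at the heart of this design is small enough to sketch. Below is a minimal, illustrative PyTorch version, assuming CLIP ViT-L/336 patch features (1024-dim) and a 7B-scale LLM embedding size (4096); the dimensions and class name are assumptions for illustration. LLaVA-1.5 uses a two-layer GELU MLP here, while the original release used a single linear layer.

import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Maps CLIP patch features into the LLM's token embedding space."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Two-layer MLP with GELU, in the style of LLaVA-1.5's mlp2x_gelu projector.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from the vision encoder.
        # Returns visual "tokens" shaped like LLM word embeddings.
        return self.proj(patch_features)

# The projected tokens are concatenated with text embeddings and fed through
# the LLM, so the whole stack can be instruction-tuned end to end.
feats = torch.randn(1, 576, 1024)   # e.g. 24x24 patches from CLIP ViT-L at 336px
tokens = VisionProjector()(feats)   # (1, 576, 4096), ready for the LLM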
24,554 stars. No commits in the last 6 months.
Stars: 24,554
Forks: 2,745
Language: Python
License: Apache-2.0
Category: transformers
Last pushed: Aug 12, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/haotian-liu/LLaVA"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
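For programmatic use, here is a minimal Python sketch of the same request. The JSON response schema is not documented here, so treat the payload fields as something to inspect rather than rely on.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/haotian-liu/LLaVA"
resp = requests.get(url, timeout=10)
resp.raise_for_status()   # surface HTTP errors (e.g. rate limiting) early
data = resp.json()
print(data)               # inspect the returned fields before depending on them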
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
DAMO-NLP-SG/Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
Instruction-Tuning-with-GPT-4/GPT-4-LLM
Instruction Tuning with GPT-4
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding