LLaVA and ViP-LLaVA

ViP-LLaVA builds on LLaVA's architecture, extending its visual instruction tuning from plain image-text pairs to arbitrary visual prompts such as spatial markers and annotations drawn directly on the image. The two projects are complementary advances in the same multimodal instruction-tuning lineage.

| Metric         | LLaVA                               | ViP-LLaVA                           |
|----------------|-------------------------------------|-------------------------------------|
| Overall score  | 47 (Emerging)                       | 38 (Emerging)                       |
| Maintenance    | 0/25                                | 0/25                                |
| Adoption       | 10/25                               | 10/25                               |
| Maturity       | 16/25                               | 16/25                               |
| Community      | 21/25                               | 12/25                               |
| Stars          | 24,554                              | 336                                 |
| Forks          | 2,745                               | 21                                  |
| Downloads      | —                                   | —                                   |
| Commits (30d)  | 0                                   | 0                                   |
| Language       | Python                              | Python                              |
| License        | Apache-2.0                          | Apache-2.0                          |
| Flags          | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About LLaVA

haotian-liu/LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Combines a vision encoder (CLIP) with a lightweight projection layer to align image features with large language models, enabling end-to-end instruction tuning on image-text pairs. Supports efficient fine-tuning via LoRA, quantization (4/8-bit), and variable resolution inputs up to 4x higher pixel density in newer versions. Integrates with Hugging Face, llama.cpp, and AutoGen, with pre-trained checkpoints spanning multiple base models (LLaMA, Llama-2, Qwen, Llama-3).
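The core of this design is the projection layer that maps vision-encoder patch features into the LLM's token-embedding space, so image "tokens" can be interleaved with text tokens. A minimal NumPy sketch of that idea follows; the dimensions (1024-dim CLIP features, 4096-dim LLM embeddings, a 576-patch grid) and the two-layer MLP shape are illustrative assumptions, not the exact LLaVA implementation.

```python
import numpy as np

# Illustrative dimensions (actual values depend on the chosen
# CLIP encoder and base LLM).
VISION_DIM = 1024   # size of one CLIP image-patch feature
LLM_DIM = 4096      # size of one LLM token embedding
NUM_PATCHES = 576   # e.g. a 24x24 patch grid

rng = np.random.default_rng(0)

# Two-layer MLP projector: vision features -> LLM embedding space.
# In training, only these weights (plus the LLM, in later stages)
# are updated; the vision encoder is typically frozen.
W1 = rng.standard_normal((VISION_DIM, LLM_DIM)) * 0.02
W2 = rng.standard_normal((LLM_DIM, LLM_DIM)) * 0.02

def project(patch_features: np.ndarray) -> np.ndarray:
    """Map (num_patches, VISION_DIM) features to (num_patches, LLM_DIM)."""
    hidden = np.maximum(patch_features @ W1, 0.0)  # ReLU here; GELU is common in practice
    return hidden @ W2

patches = rng.standard_normal((NUM_PATCHES, VISION_DIM))
image_tokens = project(patches)
print(image_tokens.shape)  # (576, 4096)
```

The projected rows can then be concatenated with the text-token embeddings and fed to the LLM as one sequence, which is what makes end-to-end instruction tuning on image-text pairs possible.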

About ViP-LLaVA

WisconsinAIVision/ViP-LLaVA

[CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
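ViP-LLaVA's visual prompts are rendered directly onto the pixel input rather than passed as coordinates, so a user can simply circle or mark a region and ask about it. A minimal sketch of composing such a prompt with Pillow (an assumed helper library; the blank image stands in for a real photo):

```python
from PIL import Image, ImageDraw

# Stand-in for a real photo; ViP-LLaVA-style prompts are overlaid
# on the image pixels themselves.
img = Image.new("RGB", (336, 336), "white")
draw = ImageDraw.Draw(img)

# Circle a region of interest in red; the model can then be asked
# e.g. "What is the object inside the red circle?"
draw.ellipse([100, 100, 200, 200], outline="red", width=4)
print(img.size)  # (336, 336)
```

Because the prompt lives in pixel space, the same mechanism covers arrows, scribbles, boxes, or any other marker a user might draw.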

Scores updated daily from GitHub, PyPI, and npm data.