LLaVA and LLaVA-Mini

LLaVA-Mini is an efficiency-focused variant of the original LLaVA architecture, designed to achieve similar multimodal capabilities with reduced computational requirements; the two are ecosystem siblings, with LLaVA-Mini serving as the lightweight alternative.

                LLaVA                                LLaVA-Mini
Score           47 (Emerging)                        41 (Emerging)
Maintenance     0/25                                 2/25
Adoption        10/25                                10/25
Maturity        16/25                                16/25
Community       21/25                                13/25
Stars           24,554                               562
Forks           2,745                                30
Downloads
Commits (30d)   0                                    0
Language        Python                               Python
License         Apache-2.0                           Apache-2.0
Flags           Stale 6m, no package, no dependents  Stale 6m, no package, no dependents

About LLaVA

haotian-liu/LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Combines a CLIP vision encoder with a lightweight projection layer to align image features with a large language model, enabling end-to-end instruction tuning on image-text pairs. Supports efficient fine-tuning via LoRA, 4-bit and 8-bit quantization, and variable-resolution inputs with up to 4x higher pixel density in newer versions. Integrates with Hugging Face, llama.cpp, and AutoGen, with pre-trained checkpoints spanning multiple base models (LLaMA, Llama-2, Qwen, Llama-3).
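A minimal sketch of that alignment step, assuming illustrative dimensions (CLIP ViT-L/14 patch features of width 1024, an LLM embedding width of 4096); the class and variable names here are hypothetical, not the repo's actual modules:

```python
# Sketch of LLaVA-style vision-language alignment: project CLIP patch
# features into the LLM embedding space, then prepend them to the prompt.
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """Maps CLIP patch features into the LLM's token-embedding space."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # LLaVA-1.5 uses a small MLP here; the original LLaVA used a
        # single linear layer.
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim)
        return self.proj(patch_features)  # (batch, num_patches, llm_dim)

# Toy forward pass: 576 patch tokens (a 24x24 grid) from the vision encoder
# are projected and concatenated in front of the embedded text prompt.
projector = VisionProjector()
image_tokens = projector(torch.randn(1, 576, 1024))
text_tokens = torch.randn(1, 32, 4096)  # stand-in for the embedded prompt
llm_input = torch.cat([image_tokens, text_tokens], dim=1)
print(llm_input.shape)  # torch.Size([1, 608, 4096])
```

Because the projected image tokens are simply prepended to the embedded prompt, the same recipe transfers across base LLMs: only the projector's output width has to match the target model.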

About LLaVA-Mini

ictnlp/LLaVA-Mini

LLaVA-Mini is a unified large multimodal model (LMM) that supports efficient understanding of images, high-resolution images, and videos.
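The efficiency claim rests on cutting down the number of vision tokens the language model has to process. Below is a minimal sketch of that idea, assuming a query-based cross-attention compressor that reduces hundreds of patch tokens to a single token; all names, dimensions, and the one-query choice are illustrative assumptions, not the repo's actual API:

```python
# Sketch of vision-token compression: learned query tokens attend over the
# full set of vision tokens, and only the query outputs reach the LLM.
import torch
import torch.nn as nn

class TokenCompressor(nn.Module):
    """Compresses N vision tokens into a few learned queries via cross-attention."""
    def __init__(self, dim: int = 4096, num_queries: int = 1, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_tokens: torch.Tensor) -> torch.Tensor:
        # vision_tokens: (batch, num_tokens, dim)
        q = self.queries.unsqueeze(0).expand(vision_tokens.size(0), -1, -1)
        compressed, _ = self.attn(q, vision_tokens, vision_tokens)
        return compressed  # (batch, num_queries, dim)

compressor = TokenCompressor()
out = compressor(torch.randn(1, 576, 4096))
print(out.shape)  # torch.Size([1, 1, 4096]) -- one vision token fed to the LLM
```

Shrinking 576 tokens to 1 makes the LLM's per-image cost nearly independent of image resolution, which is what makes high-resolution image and video input tractable.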

Scores are updated daily from GitHub, PyPI, and npm data.