LLaVA and LLaVA-Mini
LLaVA-Mini is a parameter-efficient variant derived from the original LLaVA architecture, designed to deliver similar multimodal capabilities at reduced computational cost; the two are ecosystem siblings, with LLaVA-Mini serving as the lightweight alternative.
About LLaVA
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
LLaVA combines a vision encoder (CLIP) with a lightweight projection layer that aligns image features with a large language model, enabling end-to-end instruction tuning on image-text pairs. It supports efficient fine-tuning via LoRA, 4/8-bit quantization, and, in newer versions, variable-resolution inputs with up to 4x higher pixel density. It integrates with Hugging Face, llama.cpp, and AutoGen, with pre-trained checkpoints spanning multiple base models (LLaMA, Llama-2, Qwen, Llama-3).
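The projection layer mentioned above is the core of LLaVA's design: per-patch CLIP features are mapped into the LLM's token-embedding space so the image can be consumed as ordinary input tokens. The sketch below illustrates the idea with random placeholder weights and NumPy; the dimensions (1024-d CLIP ViT-L/14 features, 4096-d LLaMA-style embeddings, a two-layer MLP projector) reflect LLaVA-1.5's published shapes, but the code is a toy, not the repository's implementation.

```python
import numpy as np

# Assumed dimensions for illustration: CLIP ViT-L/14 patch features are
# 1024-d; a LLaMA-style LLM uses 4096-d token embeddings.
VISION_DIM, HIDDEN_DIM, TEXT_DIM = 1024, 4096, 4096

rng = np.random.default_rng(0)

# Two-layer MLP projector (random placeholder weights, not trained ones).
W1 = rng.standard_normal((VISION_DIM, HIDDEN_DIM)) * 0.02
W2 = rng.standard_normal((HIDDEN_DIM, TEXT_DIM)) * 0.02

def project(vision_feats: np.ndarray) -> np.ndarray:
    """Map per-patch vision features into the LLM's embedding space."""
    # LLaVA-1.5 uses GELU between the two linear layers; ReLU stands in here.
    return np.maximum(vision_feats @ W1, 0.0) @ W2

# 576 patch tokens from a 336x336 image (24x24 grid of 14-pixel patches).
patches = rng.standard_normal((576, VISION_DIM))
visual_tokens = project(patches)
print(visual_tokens.shape)  # (576, 4096): one LLM-space token per patch
```

The projected tokens are simply concatenated with the text-prompt embeddings before being fed to the LLM, which is why only the small projector (plus optional LoRA adapters) needs training to align the two modalities.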
About LLaVA-Mini
ictnlp/LLaVA-Mini
LLaVA-Mini is a unified large multimodal model (LMM) that efficiently supports the understanding of images, high-resolution images, and videos.
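LLaVA-Mini's efficiency comes from drastically compressing the visual token stream before it reaches the LLM: instead of the hundreds of patch tokens a standard LLaVA forwards, it passes only a handful (as few as one). The snippet below is a toy illustration of query-based token compression via attention pooling; the single learnable query, the dimensions, and the plain softmax are assumptions for exposition, not the paper's actual modules.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 4096  # assumed LLM embedding width

visual_tokens = rng.standard_normal((576, DIM))  # full per-patch token set
query = rng.standard_normal((1, DIM))            # 1 compression query (placeholder)

# Cross-attention pooling: the query attends over all visual tokens and
# their weighted average becomes the single compressed visual token.
scores = query @ visual_tokens.T / np.sqrt(DIM)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
compressed = weights @ visual_tokens
print(compressed.shape)  # (1, 4096): 576 tokens reduced to 1
```

Feeding 1 visual token instead of 576 shrinks the LLM's sequence length, which is what makes high-resolution image and long-video inputs tractable at this scale.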