LLaVA and llama-multimodal-vqa

LLaVA is a foundational vision-language instruction-tuning framework; llama-multimodal-vqa builds on it by adapting its techniques specifically to the Llama 3 architecture and to visual question answering (VQA) tasks.

| Metric        | LLaVA                                      | llama-multimodal-vqa                       |
|---------------|--------------------------------------------|--------------------------------------------|
| Overall score | 47/100 (Emerging)                          | 41/100 (Emerging)                          |
| Maintenance   | 0/25                                       | 0/25                                       |
| Adoption      | 10/25                                      | 8/25                                       |
| Maturity      | 16/25                                      | 16/25                                      |
| Community     | 21/25                                      | 17/25                                      |
| Stars         | 24,554                                     | 51                                         |
| Forks         | 2,745                                      | 11                                         |
| Downloads     | n/a                                        | n/a                                        |
| Commits (30d) | 0                                          | 0                                          |
| Language      | Python                                     | Python                                     |
| License       | Apache-2.0                                 | MIT                                        |
| Flags         | Stale (6 months), No Package, No Dependents | Stale (6 months), No Package, No Dependents |

About LLaVA

haotian-liu/LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Combines a vision encoder (CLIP) with a lightweight projection layer to align image features with large language models, enabling end-to-end instruction tuning on image-text pairs. Supports efficient fine-tuning via LoRA, quantization (4/8-bit), and variable resolution inputs up to 4x higher pixel density in newer versions. Integrates with Hugging Face, llama.cpp, and AutoGen, with pre-trained checkpoints spanning multiple base models (LLaMA, Llama-2, Qwen, Llama-3).
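
To make that architecture concrete, here is a minimal sketch of the LLaVA-style wiring: a CLIP vision encoder produces patch features, a lightweight linear projector maps them into the language model's embedding space, and the projected image tokens are prepended to the text embeddings so the model trains end-to-end over both modalities. The class name and checkpoint identifiers are illustrative assumptions, not code from either repository.

```python
# A minimal sketch of a LLaVA-style vision-language model, assuming
# Hugging Face transformers. Names here are illustrative, not the
# repository's actual classes or training code.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, CLIPVisionModel

class VisionLanguageSketch(nn.Module):
    def __init__(self,
                 vision_name="openai/clip-vit-large-patch14",
                 llm_name="meta-llama/Meta-Llama-3-8B"):  # assumed checkpoints
        super().__init__()
        self.vision = CLIPVisionModel.from_pretrained(vision_name)
        self.llm = AutoModelForCausalLM.from_pretrained(llm_name)
        # Keep the vision tower frozen; only the projector (and optionally
        # LoRA adapters on the LLM) would be trained in this setup.
        self.vision.requires_grad_(False)
        # The lightweight projection layer that aligns image features
        # with the LLM's token-embedding space.
        self.projector = nn.Linear(self.vision.config.hidden_size,
                                   self.llm.config.hidden_size)

    def forward(self, pixel_values, input_ids):
        # Patch-level image features from CLIP: (batch, patches, dim).
        patches = self.vision(pixel_values=pixel_values).last_hidden_state
        image_embeds = self.projector(patches)
        # Embed the instruction text and prepend the projected image
        # "tokens", so the LLM attends over one mixed-modality sequence.
        text_embeds = self.llm.get_input_embeddings()(input_ids)
        inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```

In LLaVA's published recipe this projector is first pretrained on image-caption pairs with the rest of the model frozen, and only then is the full model instruction-tuned on conversational image-text data.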

About llama-multimodal-vqa

AdrianBZG/llama-multimodal-vqa

Multimodal Instruction Tuning for Llama 3

Scores are updated daily from GitHub, PyPI, and npm data.