inuwamobarak/MoE-LLaVA-inference
The ever-evolving landscape of artificial intelligence has brought visual and linguistic data together through large vision-language models (LVLMs). MoE-LLaVA stands at the forefront of this shift, changing how machines interpret and understand the world and mirroring human-like perception. However, the challenges…
No commits in the last 6 months.
Stars: 4
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 02, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/inuwamobarak/MoE-LLaVA-inference"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
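For scripted access, a minimal Python sketch using the requests library is shown below. The endpoint URL comes from the curl line above; the response shape is not documented here, and the way an API key is passed (an Authorization header) is an assumption, not a documented convention.

import requests

URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/inuwamobarak/MoE-LLaVA-inference"
)

# Anonymous access: the page states 100 requests/day with no key.
resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data)  # response fields are undocumented; print to inspect

# With a free key (1,000 requests/day). Passing it as a Bearer token
# is an assumption; check the API docs for the actual mechanism.
# resp = requests.get(
#     URL, headers={"Authorization": "Bearer YOUR_KEY"}, timeout=10
# )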
Higher-rated alternatives
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies