ycchen218/VisionQA-Llama2-OWLViT

This is a multimodal model designed for the Visual Question Answering (VQA) task. It integrates the Llama2 13B, OWL-ViT, and YOLOv8 models.

Score: 11 / 100 (Experimental)

No commits in the last 6 months.

No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 3 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 4
Forks:
Language: Python
License: None
Last pushed: Jun 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/ycchen218/VisionQA-Llama2-OWLViT"

Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
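
For programmatic use, here is a minimal Python sketch of the same request using only the standard library. The URL comes from the curl example above; the response field names read at the end ("score", "grade") are assumptions, since this page does not document the JSON schema, so inspect the full payload first.

import json
import urllib.request

# Quality endpoint from the curl example above.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/ycchen218/VisionQA-Llama2-OWLViT"
)

# No key needed at the free tier (100 requests/day).
with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# Inspect the full payload; the keys below are assumed names,
# not documented on this page.
print(json.dumps(data, indent=2))
print("score:", data.get("score"), "grade:", data.get("grade"))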