kyegomez/VisionLLaMA
Implementation of VisionLLaMA from the paper: "VisionLLaMA: A Unified LLaMA Interface for Vision Tasks" in PyTorch and Zeta
No commits in the last 6 months.
Stars: 16
Forks: —
Language: Python
License: MIT
Category:
Last pushed: Nov 11, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/kyegomez/VisionLLaMA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
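As a minimal sketch of using the endpoint from Python instead of curl: the helper below builds the per-repo URL and shows where a fetch would go. The response schema is not documented here, so any parsing of the returned JSON fields is an assumption.

```python
from urllib.parse import quote

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def repo_quality_url(owner: str, repo: str) -> str:
    # Build the quality-data endpoint URL for an owner/repo pair,
    # percent-encoding each path segment to be safe.
    return f"{BASE}/{quote(owner, safe='')}/{quote(repo, safe='')}"

# Example: the VisionLLaMA entry shown above.
url = repo_quality_url("kyegomez", "VisionLLaMA")
# To fetch: urllib.request.urlopen(url) and json.load() the response.
# The exact JSON fields (stars, forks, etc.) are not specified on this
# page, so treat any field names as hypothetical until confirmed.
```

No API key is needed at the 100 requests/day tier, so the URL alone is enough for a quick check.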
Higher-rated alternatives
facebookresearch/mmf
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
open-mmlab/mmpretrain
OpenMMLab Pre-training Toolbox and Benchmark
friedrichor/Awesome-Multimodal-Papers
A curated list of awesome Multimodal studies.
adambielski/siamese-triplet
Siamese and triplet networks with online pair/triplet mining in PyTorch
HuaizhengZhang/Awsome-Deep-Learning-for-Video-Analysis
Papers, code and datasets about deep learning and multi-modal learning for video analysis