vtu81/NaiveVQA
A Visual Question Answering model implemented in MindSpore and PyTorch. The model is a reimplementation of the paper *Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering*. It's our final project for course DL4NLP at ZJU.
No commits in the last 6 months.
Stars: 10
Forks: 4
Language: Jupyter Notebook
License: —
Category:
Last pushed: Jul 27, 2021
Commits (30d): 0
Get this data via API

    curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vtu81/NaiveVQA"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
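The same endpoint can be queried from Python. A minimal sketch using only the standard library; the endpoint URL comes from the curl command above, but the shape of the JSON response is an assumption, so the example only fetches and decodes it generically:

```python
import json
import urllib.request

# Endpoint base taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def build_url(owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a given GitHub repository."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str, timeout: float = 10.0) -> dict:
    """Fetch and decode the JSON quality record for owner/repo.

    The response schema is not documented on this page, so the result
    is returned as a plain dict for the caller to inspect.
    """
    with urllib.request.urlopen(build_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

# Example (performs a live network request):
#   data = fetch_quality("vtu81", "NaiveVQA")
#   print(data)
print(build_url("vtu81", "NaiveVQA"))
```

Note that unauthenticated use is capped at 100 requests/day, so a real client should cache responses rather than re-fetch on every call.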
Higher-rated alternatives
- facebookresearch/mmf: A modular framework for vision & language multimodal research from Facebook AI Research (FAIR)
- open-mmlab/mmpretrain: OpenMMLab Pre-training Toolbox and Benchmark
- friedrichor/Awesome-Multimodal-Papers: A curated list of awesome Multimodal studies.
- adambielski/siamese-triplet: Siamese and triplet networks with online pair/triplet mining in PyTorch
- KaiyangZhou/pytorch-vsumm-reinforce: Unsupervised video summarization with deep reinforcement learning (AAAI'18)