Projects-at-UWM/Image-Caption-Generation
This project generates image descriptions and attaches labels to them. The model analyzes the objects in an uploaded image, generates categories that can be used as labels, and recommends captions to the user during upload.
No commits in the last 6 months.
Stars: 4
Forks: 2
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jan 24, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Projects-at-UWM/Image-Caption-Generation"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
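The curl call above can also be made from Python with the standard library. This is a minimal sketch: the URL path segments (`nlp`, owner, repo) come from the example above, but the shape of the response body is an assumption (a JSON object), not documented here.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the endpoint and decode the body.

    Assumes the API returns JSON; adjust if the actual
    response format differs.
    """
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

Usage mirrors the curl example: `fetch_quality("nlp", "Projects-at-UWM", "Image-Caption-Generation")`. No API key is sent, so calls count against the 100-requests/day anonymous quota.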
Higher-rated alternatives
ntrang086/image_captioning
generate captions for images using a CNN-RNN model that is trained on the Microsoft Common...
fregu856/CS224n_project
Neural Image Captioning in TensorFlow.
vacancy/SceneGraphParser
A Python toolkit for parsing captions (in natural language) into scene graphs (as symbolic...
ltguo19/VSUA-Captioning
Code for "Aligning Linguistic Words and Visual Semantic Units for Image Captioning", ACM MM 2019
Abdelrhman-Yasser/video-content-description
Video content description model for generating descriptions for unconstrained videos