ltguo19/VSUA-Captioning

Code for "Aligning Linguistic Words and Visual Semantic Units for Image Captioning", ACM MM 2019

Score: 41 / 100 (Emerging)

Constructs image representations as structured graphs with Visual Semantic Units (objects, attributes, relationships) extracted from scene graphs and bottom-up attention features, then aligns these units with caption words during generation. Implements dual training stages: cross-entropy pretraining followed by reinforcement learning optimization using CIDEr rewards. Built on PyTorch with integrated geometry and semantic relationship graphs, leveraging pre-extracted bottom-up features and scene graph annotations from COCO dataset.
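The second stage described above follows the usual self-critical sequence training (SCST) recipe: captions are sampled, scored with CIDEr, and a greedy decode is used as the reward baseline. Below is a minimal sketch of that advantage-weighted loss, assuming the per-caption CIDEr rewards are already computed; tensor names and shapes are illustrative, not the repository's actual API.

import torch

def self_critical_loss(sample_logprobs, sample_rewards, greedy_rewards, mask):
    """Self-critical policy-gradient loss with CIDEr rewards.

    sample_logprobs: (B, T) log-probs of the sampled caption tokens
    sample_rewards:  (B,)   CIDEr scores of sampled captions
    greedy_rewards:  (B,)   CIDEr scores of greedy baseline captions
    mask:            (B, T) 1 for real tokens, 0 for padding
    """
    advantage = (sample_rewards - greedy_rewards).unsqueeze(1)  # (B, 1)
    # Maximize expected reward: minimize the negative advantage-weighted log-likelihood
    loss = -(advantage * sample_logprobs * mask).sum() / mask.sum()
    return loss

# Toy usage with random tensors; real rewards would come from a CIDEr scorer.
B, T = 4, 12
print(self_critical_loss(torch.randn(B, T), torch.rand(B), torch.rand(B), torch.ones(B, T)))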

258 stars. No commits in the last 6 months.

Flags: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25

Stars: 258
Forks: 24
Language: Python
License: MIT
Category: image-captioning
Last pushed: Oct 18, 2019
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/ltguo19/VSUA-Captioning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
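The same endpoint can be queried from Python; this is a small sketch using the requests library, and the response field names are not assumed here since the actual schema should be inspected from the returned JSON.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/ltguo19/VSUA-Captioning"

resp = requests.get(URL, timeout=10)  # no API key needed within the free daily quota
resp.raise_for_status()
data = resp.json()
print(data)  # inspect the payload to see the available score and metadata fields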