zarzouram/image_captioning_with_transformers

PyTorch implementation of image captioning using a transformer-based model.

Overall score: 38 / 100 (Emerging)

Implements an encoder-decoder transformer architecture with per-head attention visualization capabilities, modified from PyTorch's standard multi-head attention to enable detailed attention analysis. Trained on MS COCO 2017 with beam search inference and comprehensive NLG evaluation metrics (BLEU, METEOR, GLEU). Includes preprocessing pipeline for image-caption dataset creation with HDF5 storage and Tensorboard integration for training monitoring.
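The repository's beam search inference can be illustrated framework-agnostically. Below is a minimal, hypothetical sketch of length-normalized beam search, not the repo's actual implementation: `step_fn`, `bos`, and `eos` are assumed names, and `step_fn(seq)` stands in for a decoder returning candidate next tokens with their probabilities.

```python
import math

def beam_search(step_fn, bos, eos, beam_size=3, max_len=10):
    """Generic beam search sketch (hypothetical API, not the repo's code).

    step_fn(seq) -> list of (token, prob) candidates for the next token.
    Returns the completed sequence with the best length-normalized log-prob.
    """
    beams = [([bos], 0.0)]   # (sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == eos:
                # Completed hypotheses leave the beam.
                finished.append((seq, score))
                continue
            for tok, p in step_fn(seq):
                candidates.append((seq + [tok], score + math.log(p)))
        if not candidates:
            break
        # Keep only the top beam_size partial hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    if not finished:       # max_len reached without any eos
        finished = beams
    # Length normalization avoids favoring short captions.
    return max(finished, key=lambda c: c[1] / len(c[0]))[0]
```

With a toy two-branch distribution (`0 -> 1` with p=0.6 or `0 -> 2` with p=0.4, then `eos`), the search returns the higher-probability path `[0, 1, 3]`.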

No commits in the last 6 months.

Status: Stale (6 months) · No package · No dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25


Stars: 68
Forks: 9
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zarzouram/image_captioning_with_transformers"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.