philipperemy/keras-attention

Keras Attention Layer (Luong and Bahdanau scores).

Score: 67 / 100 (Established)

Implements both multiplicative (Luong) and additive (Bahdanau) attention mechanisms as a reusable Keras layer compatible with TensorFlow 2.0+, enabling dynamic focus on sequence elements. The layer accepts 3D sequential input and outputs attention-weighted context vectors, integrating seamlessly into RNN/LSTM architectures for tasks like machine translation and document classification. Includes model serialization support and visualization capabilities for interpreting attention weights across timesteps.
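A minimal usage sketch follows. Assumptions: the layer is importable as Attention from the attention package (per the repo README) and accepts units and score arguments selecting 'luong' or 'bahdanau'; exact parameter names may vary by version.

import numpy as np
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Model, load_model
from attention import Attention  # assumed import path, per the repo README

seq_len, n_features = 10, 1
inputs = Input(shape=(seq_len, n_features))    # 3D sequential input
x = LSTM(64, return_sequences=True)(inputs)    # (batch, time, 64)
x = Attention(units=32, score='luong')(x)      # attention-weighted context vector
outputs = Dense(1)(x)

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='mae')
model.fit(np.random.rand(8, seq_len, n_features), np.random.rand(8, 1), epochs=1)

# Serialization round-trip; passing the layer via custom_objects is the usual
# pattern for custom Keras layers (an assumption for this particular one).
model.save('attention_model.h5')
model = load_model('attention_model.h5', custom_objects={'Attention': Attention})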

2,815 stars. Still maintained, with 1 commit in the last 30 days.

No package published; no known dependents.
Maintenance: 16 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 25 / 25

Stars: 2,815
Forks: 659
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/philipperemy/keras-attention"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
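A sketch of the same call from Python using the requests library; the response is printed as-is because the JSON schema is not documented on this card.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "ml-frameworks/philipperemy/keras-attention")
resp = requests.get(url, timeout=10)  # no API key needed up to 100 requests/day
resp.raise_for_status()
print(resp.json())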