keras-attention and attention_keras

These are **competitors**: both provide Keras implementations of attention mechanisms (Luong multiplicative and Bahdanau additive scoring) for sequential models, serving the same purpose with largely overlapping functionality.

| | keras-attention | attention_keras |
| --- | --- | --- |
| Score | 67 (Established) | 51 (Established) |
| Maintenance | 16/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 25/25 | 25/25 |
| Stars | 2,815 | 444 |
| Forks | 659 | 266 |
| Downloads | | |
| Commits (30d) | 1 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | MIT |
| Flags | No Package, No Dependents | Stale 6m, No Package, No Dependents |

About keras-attention

philipperemy/keras-attention

Keras Attention Layer (Luong and Bahdanau scores).

Implements both multiplicative (Luong) and additive (Bahdanau) attention mechanisms as a reusable Keras layer compatible with TensorFlow 2.0+, enabling dynamic focus on sequence elements. The layer accepts 3D sequential input and outputs attention-weighted context vectors, integrating seamlessly into RNN/LSTM architectures for tasks like machine translation and document classification. Includes model serialization support and visualization capabilities for interpreting attention weights across timesteps.
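The two scoring variants named above can be illustrated without the library itself. The sketch below is a minimal NumPy illustration of what a Luong (dot-product) and a Bahdanau (additive) score compute over a sequence of hidden states; the dimensions, variable names, and random parameters are hypothetical, not taken from either repository's API.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical toy sizes: T timesteps, d hidden units.
T, d = 5, 8
rng = np.random.default_rng(0)
H = rng.standard_normal((T, d))   # encoder hidden states, shape (T, d)
q = rng.standard_normal(d)        # decoder query state, shape (d,)

# Luong (multiplicative) score: dot product of the query with each state.
luong_scores = H @ q              # shape (T,)

# Bahdanau (additive) score: v^T tanh(W1 h_t + W2 q), with learned
# parameters W1, W2, v (randomly initialised here purely for illustration).
W1 = rng.standard_normal((d, d))
W2 = rng.standard_normal((d, d))
v = rng.standard_normal(d)
bahdanau_scores = np.tanh(H @ W1.T + q @ W2.T) @ v   # shape (T,)

# Either score vector becomes attention weights via softmax, and the
# context vector is the weighted sum of the hidden states.
weights = softmax(luong_scores)
context = weights @ H             # attention-weighted context, shape (d,)
```

The `weights` vector is what a visualization of "attention across timesteps" plots; the `context` vector is what the layer feeds downstream in an RNN/LSTM architecture.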

About attention_keras

thushv89/attention_keras

Keras Layer implementation of Attention for Sequential models
