Mogalina/transformer
Minimal Transformer implementation in pure C based on the architecture from the "Attention Is All You Need" paper.
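At its core, such an implementation comes down to the paper's scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V. Below is a minimal, self-contained sketch of that one operation in C; the names and array layout are illustrative and are not taken from the Mogalina/transformer source.

/*
 * Sketch: scaled dot-product attention from "Attention Is All You Need".
 * out = softmax(Q K^T / sqrt(d_k)) V, computed row by row.
 * Build: gcc attn.c -lm
 */
#include <math.h>
#include <stdio.h>

#define SEQ 2   /* sequence length */
#define DK  2   /* key/query/value dimension */

static void attention(const double Q[SEQ][DK], const double K[SEQ][DK],
                      const double V[SEQ][DK], double out[SEQ][DK]) {
    double scale = 1.0 / sqrt((double)DK);
    for (int i = 0; i < SEQ; i++) {
        double scores[SEQ], max = -INFINITY, sum = 0.0;
        /* Scaled dot product between query row i and every key row */
        for (int j = 0; j < SEQ; j++) {
            scores[j] = 0.0;
            for (int d = 0; d < DK; d++)
                scores[j] += Q[i][d] * K[j][d];
            scores[j] *= scale;
            if (scores[j] > max) max = scores[j];
        }
        /* Numerically stable softmax over the scores */
        for (int j = 0; j < SEQ; j++) {
            scores[j] = exp(scores[j] - max);
            sum += scores[j];
        }
        /* Output row i is the attention-weighted sum of value rows */
        for (int d = 0; d < DK; d++) {
            out[i][d] = 0.0;
            for (int j = 0; j < SEQ; j++)
                out[i][d] += (scores[j] / sum) * V[j][d];
        }
    }
}

int main(void) {
    double Q[SEQ][DK] = {{1, 0}, {0, 1}};
    double K[SEQ][DK] = {{1, 0}, {0, 1}};
    double V[SEQ][DK] = {{1, 2}, {3, 4}};
    double out[SEQ][DK];
    attention(Q, K, V, out);
    for (int i = 0; i < SEQ; i++)
        printf("%f %f\n", out[i][0], out[i][1]);
    return 0;
}

A full Transformer layers learned Q/K/V projections, multiple heads, residual connections, and position-wise feed-forward blocks on top of this kernel.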
Stars: —
Forks: —
Language: C
License: MIT
Category: ml-frameworks (per the API path below)
Last pushed: Mar 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Mogalina/transformer"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
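The same endpoint can be called programmatically. A hedged sketch in C using libcurl follows; only the URL is taken from this page, and error handling is kept minimal for brevity.

/*
 * Sketch: fetch this repo's quality record with libcurl.
 * The default write callback prints the response body to stdout.
 * Build: gcc fetch.c -lcurl
 */
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Mogalina/transformer");
    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}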
Higher-rated alternatives
philipperemy/keras-attention
Keras Attention Layer (Luong and Bahdanau scores).
tatp22/linformer-pytorch
My take on a practical implementation of Linformer for PyTorch.
thushv89/attention_keras
Keras Layer implementation of Attention for Sequential models.
datalogue/keras-attention
Visualizing RNNs using the attention mechanism
ematvey/hierarchical-attention-networks
Document classification with Hierarchical Attention Networks in TensorFlow. WARNING: project is...