Rohan-Thoma/Coding-attention-from-scratch
This repository contains code implementing the attention mechanism from scratch for language translation models. It is built from the ground up to translate Italian to English, completely without any pretraining.
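The notebooks themselves are not reproduced in this listing, but for context, a minimal NumPy sketch of scaled dot-product attention (the core operation a from-scratch translation model like this is built around) might look like the following. The function name, shapes, and toy data are illustrative assumptions, not taken from the repository.

import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """softmax(Q K^T / sqrt(d_k)) V -- the core of an attention layer."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (seq_q, seq_k) similarities
    if mask is not None:
        scores = np.where(mask, scores, -1e9)       # suppress masked positions
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ V

# Toy example: 4 target tokens attending over 5 source tokens,
# each embedded in 8 dimensions. All shapes are illustrative only.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(5, 8))
V = rng.normal(size=(5, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)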
No commits in the last 6 months.
Stars: 2
Forks: —
Language: Jupyter Notebook
License: Apache-2.0
Category: —
Last pushed: May 04, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Rohan-Thoma/Coding-attention-from-scratch"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
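For consumers who prefer Python over curl, a minimal keyless equivalent using only the standard library might look like the sketch below. The response schema is not documented in this listing, so the script simply pretty-prints whatever JSON the endpoint returns.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/transformers/"
       "Rohan-Thoma/Coding-attention-from-scratch")

# Keyless request (100 requests/day tier); inspect the raw response.
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))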
Higher-rated alternatives
huggingface/transformers
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in...
kyegomez/LongNet
Implementation of plug in and play Attention from "LongNet: Scaling Transformers to 1,000,000,000 Tokens"
pbloem/former
Simple transformer implementation from scratch in pytorch. (archival, latest version on codeberg)
NVIDIA/FasterTransformer
Transformer related optimization, including BERT, GPT
kyegomez/SimplifiedTransformers
SimplifiedTransformer simplifies transformer block without affecting training. Skip connections,...