Sid7on1/Transformer-256dim
A Transformer architecture built from scratch by Prajwal for sequence-modeling tasks. It uses multi-head self-attention, layer normalization, and feedforward networks to capture complex patterns in sequential data, and is intended for NLP tasks such as classification, translation, and generation.
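The listing includes no code, but the components named above compose into a standard encoder block. Below is a minimal PyTorch sketch of one such block at 256 dimensions; the head count, feedforward width, dropout, and post-norm ordering are illustrative assumptions, not details taken from the repository.

import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One 256-dim encoder block: self-attention and feedforward, each with a residual and LayerNorm."""
    def __init__(self, dim: int = 256, heads: int = 8, ff_mult: int = 4, dropout: float = 0.1):
        super().__init__()
        # Multi-head self-attention over inputs shaped (batch, sequence, dim).
        self.attn = nn.MultiheadAttention(dim, heads, dropout=dropout, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        # Position-wise feedforward network.
        self.ff = nn.Sequential(
            nn.Linear(dim, dim * ff_mult),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(dim * ff_mult, dim),
        )
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection around attention, then around the feedforward.
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + attn_out)
        x = self.norm2(x + self.ff(x))
        return x

tokens = torch.randn(2, 16, 256)          # (batch, sequence length, embedding dim)
print(TransformerBlock()(tokens).shape)   # torch.Size([2, 16, 256])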
No commits in the last 6 months.
Stars: 1
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: May 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Sid7on1/Transformer-256dim"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
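The same request from Python, as a sketch: the response schema is not documented in this listing, so the JSON payload is simply printed, and no API key is sent since the keyless quota covers a single call.

import json
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/Sid7on1/Transformer-256dim"

resp = requests.get(URL, timeout=10)      # keyless access: up to 100 requests/day
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))  # field names are not documented here, so dump the whole payload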
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/locoformer
LocoFormer - Generalist Locomotion via Long-Context Adaptation