AnkitaMungalpara/Building-DeepSeek-From-Scratch
This repository shows how to build a DeepSeek language model from scratch using PyTorch. It includes clean, well-structured implementations of advanced attention techniques such as key–value caching for fast decoding, multi-query attention, grouped-query attention, and multi-head latent attention.
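As a concrete illustration of one of the techniques the repository covers, here is a minimal PyTorch sketch of grouped-query attention, in which several query heads share a single key/value head. This is not the repository's code; the class and parameter names are hypothetical.

# Minimal grouped-query attention (GQA) sketch. Illustrative only;
# names like GroupedQueryAttention and n_kv_heads are assumptions,
# not identifiers from the repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupedQueryAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int, n_kv_heads: int):
        super().__init__()
        assert n_heads % n_kv_heads == 0
        self.n_heads = n_heads
        self.n_kv_heads = n_kv_heads
        self.head_dim = d_model // n_heads
        # Queries keep one projection per head; keys and values are
        # shared across groups of query heads, which shrinks the KV cache.
        self.w_q = nn.Linear(d_model, n_heads * self.head_dim, bias=False)
        self.w_k = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.w_v = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.w_o = nn.Linear(n_heads * self.head_dim, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.w_q(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.w_k(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.w_v(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        # Repeat each KV head so every query head in its group attends to it.
        group = self.n_heads // self.n_kv_heads
        k = k.repeat_interleave(group, dim=1)
        v = v.repeat_interleave(group, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.w_o(out.transpose(1, 2).reshape(b, t, -1))

x = torch.randn(2, 16, 512)
attn = GroupedQueryAttention(d_model=512, n_heads=8, n_kv_heads=2)
print(attn(x).shape)  # torch.Size([2, 16, 512])

Setting n_kv_heads=1 recovers multi-query attention, and n_kv_heads=n_heads is standard multi-head attention; the shared key/value heads are what reduce the KV cache size during decoding.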
Stars: —
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Jan 10, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/AnkitaMungalpara/Building-DeepSeek-From-Scratch"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
Higher-rated alternatives
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features...
kanishkamisra/minicons
Utility for behavioral and representational analyses of Language Models
lucidrains/dreamer4
Implementation of Danijar's latest iteration for his Dreamer line of work
lucidrains/simple-hierarchical-transformer
Experiments around a simple idea for inducing multiple hierarchical predictive models within a GPT
lucidrains/locoformer
LocoFormer - Generalist Locomotion via Long-Context Adaptation