labmlai/annotated_deep_learning_paper_implementations

๐Ÿง‘โ€๐Ÿซ 60+ Implementations/tutorials of deep learning papers with side-by-side notes ๐Ÿ“; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans(cyclegan, stylegan2, ...), ๐ŸŽฎ reinforcement learning (ppo, dqn), capsnet, distillation, ... ๐Ÿง 

Score: 56 / 100 (Established)

Each implementation is built with PyTorch and rendered on an interactive website displaying code alongside paper-derived explanations in a split-pane format. Beyond core architectures, the collection spans specialized techniques like Flash Attention for transformers, LLM.int8() quantization for efficient inference, and game-theoretic algorithms (CFR for poker), with active weekly updates across diverse domains from diffusion models to graph neural networks.


No package. No dependents.

Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 65,913
Forks: 6,621
Language: Python
License: MIT
Last pushed: Jan 22, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/labmlai/annotated_deep_learning_paper_implementations"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
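The same endpoint can be called from Python using only the standard library. This is a minimal sketch, assuming the response body is JSON (its field names are not documented here) and that `transformers` is a fixed category segment in the path; the function names are illustrative, not part of any published client.

```python
import json
import urllib.request

# Assumed base path, taken from the curl example above; the
# "transformers" segment appears to be a fixed category name.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a given GitHub owner/repo."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict.

    No API key is sent, so this is subject to the anonymous
    100 requests/day limit mentioned above.
    """
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


url = quality_url("labmlai", "annotated_deep_learning_paper_implementations")
```

`fetch_quality` performs a live network request, so in practice you would wrap it with error handling (`urllib.error.HTTPError`) and respect the daily quota.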