YichenZW/llm-arch-table

Living comparison table of LLM architectural choices (norm, attention, MoE, positional encoding, and more) from the Original Transformer (2017) to frontier models (2026). Based on Harm de Vries's figure, Sebastian Raschka's Big LLM Architecture Comparison, and Tatsunori Hashimoto's Stanford CS 336 lecture.

Score: 14 / 100 (Experimental)
No license, no package, no dependents
Maintenance 13 / 25, Adoption 0 / 25, Maturity 1 / 25, Community 0 / 25


Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/YichenZW/llm-arch-table"

Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
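For scripted access, the same endpoint can be called from Python. This is a minimal sketch: the base URL and the `/nlp/YichenZW/llm-arch-table` path come from the curl example above, but the general path layout (`/api/v1/quality/<registry>/<owner>/<repo>`) is inferred from that single example and may not hold for other registries.

```python
from urllib.parse import quote

# Base URL taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Build the quality-score endpoint URL for a repository.

    The path layout is an assumption generalized from the one
    documented example; percent-encode each segment to be safe.
    """
    return f"{BASE}/{quote(registry)}/{quote(owner)}/{quote(repo)}"

url = quality_url("nlp", "YichenZW", "llm-arch-table")
print(url)

# To actually fetch the data (requires network access):
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```

The fetch itself is left as a comment so the sketch runs offline; swap in `requests` or any HTTP client if you prefer.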