vixhal-baraiya/microgpt-c
The most atomic way to train and run inference for a GPT in pure, dependency-free C
Implements the transformer architecture with multi-head attention and feedforward layers entirely in C, requiring only the standard math library. Supports both training from scratch on raw text and inference via sampling, with compilation flags that leverage CPU-native vectorization (AVX) and fast-math optimizations for performance. No external ML frameworks or dependencies: the entire forward/backward pass and tokenization pipeline are self-contained.
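For orientation, here is a minimal sketch of the core operation the description names: causal, single-head scaled dot-product attention, written under the same constraints (pure C, standard math library only). It is an illustration under assumptions, not code from the repository; the file name in the build comment and all identifiers are hypothetical.

/*
 * Illustrative sketch, not repository code: single-head scaled
 * dot-product attention with causal masking, using only the C
 * standard library plus -lm.
 *
 * A plausible build line with the kinds of flags the description
 * mentions (file name hypothetical):
 *   cc -O3 -march=native -mavx -ffast-math attention_sketch.c -lm
 */
#include <math.h>
#include <stdio.h>

#define T 4   /* sequence length */
#define D 8   /* head dimension  */

/* out = softmax(Q K^T / sqrt(D)) V, masked so token t sees only 0..t */
static void attention(const float Q[T][D], const float K[T][D],
                      const float V[T][D], float out[T][D]) {
    for (int t = 0; t < T; t++) {
        float scores[T], maxs = -INFINITY, sum = 0.0f;
        /* scaled dot products against all unmasked key positions */
        for (int s = 0; s <= t; s++) {
            float dot = 0.0f;
            for (int d = 0; d < D; d++) dot += Q[t][d] * K[s][d];
            scores[s] = dot / sqrtf((float)D);
            if (scores[s] > maxs) maxs = scores[s];
        }
        /* numerically stable softmax over the unmasked scores */
        for (int s = 0; s <= t; s++) {
            scores[s] = expf(scores[s] - maxs);
            sum += scores[s];
        }
        /* attention-weighted sum of value vectors */
        for (int d = 0; d < D; d++) {
            float acc = 0.0f;
            for (int s = 0; s <= t; s++) acc += (scores[s] / sum) * V[s][d];
            out[t][d] = acc;
        }
    }
}

int main(void) {
    float Q[T][D], K[T][D], V[T][D], out[T][D];
    /* deterministic toy inputs in place of learned projections */
    for (int t = 0; t < T; t++)
        for (int d = 0; d < D; d++) {
            Q[t][d] = 0.01f * (float)(t + d);
            K[t][d] = 0.02f * (float)(t - d);
            V[t][d] = 0.03f * (float)(t * d);
        }
    attention(Q, K, V, out);
    printf("out[T-1][0] = %f\n", (double)out[T - 1][0]);
    return 0;
}

The same routine extends to multi-head attention by running it once per head on sliced Q/K/V projections and concatenating the per-head outputs.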
Stars: 234
Forks: 44
Language: C
License: MIT
Category: llm-tools
Last pushed: Feb 15, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/vixhal-baraiya/microgpt-c"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
milanm/AutoGrad-Engine
A complete GPT language model (training and inference) in ~600 lines of pure C#, zero dependencies
LeeSinLiang/microGPT
Implementation of GPT from scratch. Designed to be lightweight and easy to modify.
dubzdubz/microgpt-ts
A complete GPT built from scratch in TypeScript with zero dependencies
ssrhaso/microjpt
The most atomic way to train and run inference for a GPT in 100 lines of pure, dependency-free Julia.
biegehydra/NanoGptDotnet
A miniature large language model (LLM) that generates Shakespeare-like text, written in C#...