happy-llm and happy-llm-colab

happy-llm-colab is a complementary Colab-based implementation of happy-llm's curriculum, allowing practitioners to run the same LLM tutorials in a free GPU environment without local setup.

                  happy-llm            happy-llm-colab
Overall score     59 (Established)     50 (Established)
Maintenance       13/25                10/25
Adoption          10/25                8/25
Maturity          16/25                15/25
Community         20/25                17/25
Stars             27,292               69
Forks             2,515                12
Downloads         n/a                  n/a
Commits (30d)     1                    0
Language          Jupyter Notebook     Python
License           n/a                  n/a
Package           None                 None
Dependents        None                 None

About happy-llm

datawhalechina/happy-llm

📚 A from-scratch tutorial on large language model principles and practice

Covers foundational NLP concepts through practical LLM implementation, with structured chapters progressing from Transformer architecture and attention mechanisms to hands-on model building using PyTorch. Includes end-to-end training workflows (pretraining, supervised fine-tuning, LoRA optimization) and applications like RAG and agent systems, with downloadable pretrained 215M parameter models and companion code implementations.
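Among the fine-tuning techniques the curriculum covers, LoRA is the one most compactly illustrated in code. The following is a minimal NumPy sketch of the LoRA idea only, not code from the repository; all names and sizes are hypothetical.

```python
# Minimal LoRA sketch (illustrative only, not the repo's actual code).
# LoRA freezes the pretrained weight W and learns a low-rank update B @ A,
# so the layer computes y = x @ W.T + (alpha / r) * x @ A.T @ B.T.
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 8, 4, 8   # hypothetical dimensions and rank
W = rng.normal(size=(d_out, d_in))    # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01 # trainable down-projection
B = np.zeros((d_out, r))              # trainable up-projection, zero-initialized

def lora_linear(x):
    # Base path plus scaled low-rank correction
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(2, d_in))
y = lora_linear(x)

# Because B starts at zero, the LoRA branch contributes nothing initially,
# so the adapted layer exactly matches the frozen base layer at step 0.
assert np.allclose(y, x @ W.T)
```

The zero initialization of B is the standard LoRA trick: training starts from the pretrained model's behavior, and only the small A/B matrices (rank r rather than d_out × d_in parameters) are updated.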

About happy-llm-colab

ningg/happy-llm-colab

happy-llm practice exercises: Colab version, pynb format, free GPU

Converts the original happy-llm repository into executable Jupyter notebooks optimized for Google Colab, enabling GPU-accelerated LLM training and inference without local setup. Covers the complete pipeline from NLP fundamentals through the Transformer architecture, pre-training, and fine-tuning techniques (LoRA/QLoRA), with hands-on implementations of models such as LLaMA2 and tokenizers. Maintained on a weekly cadence to stay in sync with upstream happy-llm updates, with automated notebook generation via nbformat/nbconvert tooling.
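The notebook generation that such a sync pipeline relies on targets the nbformat v4 schema, in which an .ipynb file is plain JSON. A minimal sketch of that target format, using only the standard library (the repository's actual tooling is not shown here, and `make_notebook` is a hypothetical helper):

```python
# Sketch: assembling a minimal Colab-ready notebook as nbformat-v4 JSON.
# Illustrative of the .ipynb schema only, not happy-llm-colab's tooling.
import json

def make_notebook(code_sources):
    """Wrap a list of code strings into a minimal nbformat-v4 notebook dict."""
    cells = [
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": src,
        }
        for src in code_sources
    ]
    return {"cells": cells, "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

nb = make_notebook(["print('hello from Colab')"])
nb_json = json.dumps(nb, indent=1)  # serialize; writing this to demo.ipynb yields an openable notebook
```

In practice the nbformat package (`nbformat.v4.new_notebook`, `new_code_cell`) builds and validates these structures, and nbconvert handles execution and format conversion; the JSON above is simply what those tools emit.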

Scores are updated daily from GitHub, PyPI, and npm data.