happy-llm and happy-llm-colab
happy-llm-colab is a complementary Colab-based implementation of happy-llm's curriculum, allowing practitioners to run the same LLM tutorials in a free GPU environment without local setup.
About happy-llm
datawhalechina/happy-llm
📚 A from-scratch tutorial on the principles and practice of large language models
Covers foundational NLP concepts through practical LLM implementation, with structured chapters progressing from Transformer architecture and attention mechanisms to hands-on model building in PyTorch. Includes end-to-end training workflows (pretraining, supervised fine-tuning, LoRA optimization) and applications such as RAG and agent systems, with downloadable pretrained 215M-parameter models and companion code implementations.
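The LoRA step in that fine-tuning workflow can be sketched as a trainable low-rank update added to a frozen linear layer. This is a minimal illustration, not the tutorial's actual code; the class name, rank `r`, and `alpha` scaling below are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update.

    Illustrative sketch: output = W x + (alpha / r) * B A x,
    where only A and B receive gradients.
    """

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        # A is initialized small, B at zero, so training starts from the base model.
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(64, 64)
out = layer(torch.randn(2, 64))
```

Because the base weight is frozen, only the small `A` and `B` matrices are updated during fine-tuning, which is what makes LoRA cheap enough to run on a single free GPU.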
About happy-llm-colab
ningg/happy-llm-colab
Hands-on exercises for happy-llm: Colab version, ipynb format, free GPU
Converts the original happy-llm repository into executable Jupyter notebooks optimized for Google Colab, enabling GPU-accelerated LLM training and inference without local setup. Covers the complete pipeline from NLP fundamentals through Transformer architecture, pretraining, and fine-tuning techniques (LoRA/QLoRA), with hands-on implementations of models like LLaMA2 and tokenizers. Maintained on a weekly cadence to stay in sync with upstream happy-llm updates, with automated notebook generation via nbformat/nbconvert tooling.
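The automated notebook generation it mentions can be sketched with nbformat's v4 API: build cells programmatically, then write a valid `.ipynb` file. The cell contents and filename below are illustrative assumptions, not the repo's actual conversion script.

```python
# Sketch of programmatic notebook generation with nbformat
# (the repo's real tooling is more involved; this shows the core API).
import nbformat
from nbformat.v4 import new_notebook, new_code_cell, new_markdown_cell

cells = [
    new_markdown_cell("## Tokenizer demo"),          # hypothetical section title
    new_code_cell("print('hello from Colab')"),      # hypothetical code cell
]
nb = new_notebook(cells=cells, metadata={"language_info": {"name": "python"}})

# nbformat validates the schema on write, so the result opens cleanly in Colab.
with open("demo.ipynb", "w", encoding="utf-8") as f:
    nbformat.write(nb, f)
```

A companion `nbconvert` call (e.g. executing or re-exporting the generated notebook) would typically follow in a sync pipeline like this.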