LLMForEverybody and happy-llm
These resources complement each other: LLMForEverybody provides interview preparation and intuition-building for LLM concepts, while happy-llm offers a structured zero-to-hero tutorial on LLM principles and implementation, making them natural paired resources for different learning stages.
About LLMForEverybody
luhengshiwo/LLMForEverybody
LLM knowledge sharing that anyone can understand; essential reading before spring/autumn recruitment LLM interviews, so you can hold your own with interviewers.
Provides structured paper-by-paper analysis tracing the Transformer's evolution from foundational architectures (Transformer, BERT, GPT series) through multimodal models (CLIP, LLaVA) to recent efficient variants (LLaMA, Mistral, Mixtral), with accompanying video walkthroughs and curated interview questions covering core concepts such as self-attention, instruction tuning, and MoE routing. The repository pairs Bilibili and YouTube video tutorials with paper summaries, letting learners progressively build mental models of LLM development trajectories across language, vision, and code domains.
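Of the interview topics listed above, self-attention is the one most commonly probed with a whiteboard question. The sketch below shows the scaled dot-product mechanism in minimal PyTorch; it is illustrative only, not code from the repository, and the function name and toy dimensions are assumptions:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x:             (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_head) projection matrices
    """
    q = x @ w_q                        # queries
    k = x @ w_k                        # keys
    v = x @ w_v                        # values
    d_head = q.size(-1)
    scores = q @ k.T / d_head ** 0.5   # token-to-token similarity, scaled
    weights = F.softmax(scores, dim=-1)
    return weights @ v                 # attention-weighted sum of values

# Toy usage: 4 tokens, model width 8, single head of width 8.
x = torch.randn(4, 8)
w = lambda: torch.randn(8, 8)
out = self_attention(x, w(), w(), w())
print(out.shape)  # torch.Size([4, 8])
```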
About happy-llm
datawhalechina/happy-llm
📚 A from-scratch tutorial on large language model principles and practice
Covers foundational NLP concepts through practical LLM implementation, with structured chapters progressing from the Transformer architecture and attention mechanisms to hands-on model building in PyTorch. Includes end-to-end training workflows (pretraining, supervised fine-tuning, LoRA optimization) and applications such as RAG and agent systems, with downloadable pretrained 215M-parameter models and companion code implementations.
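Of the training stages mentioned, LoRA is the most compact to illustrate: a frozen pretrained weight plus a trainable low-rank update. A minimal PyTorch sketch of the idea follows; the class name, rank, and alpha are illustrative assumptions, not taken from happy-llm's code:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + scale * (B A) x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A down-projects, B up-projects. B starts at zero,
        # so training begins exactly at the pretrained behaviour.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Toy usage: wrap one projection; only the LoRA factors remain trainable.
layer = LoRALinear(nn.Linear(64, 64))
x = torch.randn(2, 64)
print(layer(x).shape)                                                  # torch.Size([2, 64])
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))   # 1024
```

The zero-initialized up-projection is the standard LoRA trick: the adapted model starts identical to the base model, and fine-tuning only learns the low-rank delta.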