datawhalechina/self-llm

《开源大模型食用指南》(A Practical Guide to Open-Source LLMs): a tutorial tailored for Chinese users covering rapid fine-tuning (full-parameter/LoRA) and deployment of open-source LLMs and multimodal large models (MLLMs) from China and abroad in a Linux environment.

Score: 62 / 100 (Established)

Provides comprehensive tutorials covering 50+ open-source LLMs with model-specific environment configurations, deployment strategies (CLI, web demos, LangChain integration), and multiple fine-tuning approaches including distributed full-parameter training, LoRA, and P-tuning. Targets resource-constrained users by emphasizing local deployment and private domain adaptation without API dependencies, supporting NVIDIA and AMD GPUs as well as specialized accelerators such as Huawei Ascend NPUs.

28,927 stars. Actively maintained with 3 commits in the last 30 days.

No package · No dependents

Maintenance: 16 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 20 / 25


Stars: 28,927
Forks: 2,857
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Mar 08, 2026
Commits (30d): 3

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/datawhalechina/self-llm"

Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
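As a minimal sketch, the JSON returned by the endpoint above could be consumed in Python as follows. The field names (`repo`, `score`, `breakdown`) are assumptions for illustration only; the actual response schema is not documented here, so check the real payload before relying on any key.

```python
import json

# Hypothetical response shape -- field names are assumed, not taken from the API docs.
sample = json.loads("""
{
  "repo": "datawhalechina/self-llm",
  "score": 62,
  "breakdown": {"maintenance": 16, "adoption": 10, "maturity": 16, "community": 20}
}
""")

# The four sub-scores (each out of 25) sum to the overall score out of 100.
total = sum(sample["breakdown"].values())
print(f"{sample['repo']}: {sample['score']}/100 (breakdown sum: {total})")
```

In a live call you would replace the embedded `sample` string with the body of the `curl` request, e.g. via `requests.get(...).json()`.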