Chinese-LLaMA-Alpaca and Chinese-LLaMA-Alpaca-2

These are two sequential versions of the same project line: the second builds on and supersedes the first, upgrading the base model from LLaMA to Llama-2 and extending context length to 64K tokens.

Chinese-LLaMA-Alpaca
  Maintenance: 2/25
  Adoption: 10/25
  Maturity: 16/25
  Community: 20/25
  Stars: 18,970
  Forks: 1,868
  Commits (30d): 0
  Language: Python
  License: Apache-2.0
  Status: Stale (no commits in 6 months); no package published, no known dependents

Chinese-LLaMA-Alpaca-2
  Maintenance: 2/25
  Adoption: 10/25
  Maturity: 16/25
  Community: 19/25
  Stars: 7,163
  Forks: 568
  Commits (30d): 0
  Language: Python
  License: Apache-2.0
  Status: Stale (no commits in 6 months); no package published, no known dependents

About Chinese-LLaMA-Alpaca

ymcui/Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models + local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

Extends LLaMA's tokenizer with a dedicated Chinese vocabulary and continues pretraining on Chinese corpora to improve Chinese semantic understanding; the Alpaca variants are further instruction-tuned for dialogue tasks. Integrates with major frameworks (transformers, llama.cpp, LangChain, text-generation-webui) and includes quantization pipelines for efficient inference on consumer-grade CPUs and GPUs. Provides open-source training scripts and model variants (7B/13B/33B), with specialized Plus and Pro editions optimized for response quality and length.
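The vocabulary extension described above can be sketched as a simple merge: pieces from a Chinese SentencePiece tokenizer are appended to the base LLaMA vocabulary, and new IDs are assigned only to genuinely new pieces. The token lists below are tiny illustrative stand-ins, not the projects' real vocabularies, and `merge_vocab` is a hypothetical helper, not an API from the repository.

```python
# Minimal sketch of the vocabulary-merge idea behind Chinese-LLaMA.
# Assumption: tokens are plain strings; real tokenizers work on
# SentencePiece pieces, but the dedup-and-append logic is the same.

def merge_vocab(base_tokens, extra_tokens):
    """Append extra_tokens to base_tokens, skipping pieces already present."""
    seen = set(base_tokens)
    merged = list(base_tokens)
    for tok in extra_tokens:
        if tok not in seen:
            merged.append(tok)
            seen.add(tok)
    return merged

base = ["<s>", "</s>", "the", "中"]   # stand-in for LLaMA's ~32K pieces
chinese = ["中", "文", "模型"]         # stand-in for a Chinese SentencePiece vocab
vocab = merge_vocab(base, chinese)
# Only "文" and "模型" get new IDs; the model's embedding matrix is then
# resized to len(vocab) before continued pretraining on Chinese text.
```

This is why the extended models tokenize Chinese far more compactly: common characters and words become single pieces instead of byte-level fragments.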

About Chinese-LLaMA-Alpaca-2

ymcui/Chinese-LLaMA-Alpaca-2

Chinese LLaMA-2 & Alpaca-2 large-model phase-two project + 64K ultra-long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)

Scores updated daily from GitHub, PyPI, and npm data.