Chinese-LLaMA-Alpaca and Chinese-LLaMA-Alpaca-2
These are two sequential generations of the same project line: the second builds on and supersedes the first, upgrading the base model from LLaMA (v1) to LLaMA-2 and extending the maximum context length to 64K tokens.
About Chinese-LLaMA-Alpaca
ymcui/Chinese-LLaMA-Alpaca
Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
Extends LLaMA's tokenizer with a dedicated Chinese vocabulary and performs continued pretraining on Chinese corpora to improve Chinese semantic understanding, while the Alpaca variants are further instruction-tuned for dialogue tasks. Supports seamless integration with major frameworks (transformers, llama.cpp, LangChain, text-generation-webui) and includes quantization pipelines that enable efficient inference on consumer-grade CPUs and GPUs. Provides open-source training scripts and model variants (7B/13B/33B), including specialized Plus and Pro editions optimized for response quality and length.
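The vocabulary extension works by merging a Chinese SentencePiece model into LLaMA's original tokenizer. Below is a minimal sketch of that idea, assuming a local LLaMA tokenizer.model and a separately trained Chinese SentencePiece model; the file paths are placeholders, and the project's actual merge script may differ in detail.

```python
from sentencepiece import sentencepiece_model_pb2 as sp_pb2

# Load LLaMA's original tokenizer and a Chinese SentencePiece model
# (both files are serialized SentencePiece ModelProto messages).
llama_proto = sp_pb2.ModelProto()
with open("llama/tokenizer.model", "rb") as f:   # placeholder path
    llama_proto.ParseFromString(f.read())

chinese_proto = sp_pb2.ModelProto()
with open("chinese_sp.model", "rb") as f:        # placeholder path
    chinese_proto.ParseFromString(f.read())

# Append only the Chinese pieces that LLaMA's vocabulary lacks.
existing = {p.piece for p in llama_proto.pieces}
for piece in chinese_proto.pieces:
    if piece.piece not in existing:
        new_piece = sp_pb2.ModelProto.SentencePiece()
        new_piece.piece = piece.piece
        new_piece.score = 0.0
        llama_proto.pieces.append(new_piece)

with open("merged_tokenizer.model", "wb") as f:
    f.write(llama_proto.SerializeToString())
```

After merging, the base model's embedding matrix has to be resized to the new vocabulary size (in transformers, model.resize_token_embeddings(len(tokenizer))) before continued pretraining on Chinese text.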
About Chinese-LLaMA-Alpaca-2
ymcui/Chinese-LLaMA-Alpaca-2
Phase-2 project for Chinese LLaMA-2 & Alpaca-2 large models, plus 64K ultra-long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
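Since both projects publish checkpoints compatible with transformers, here is a minimal inference sketch for an Alpaca-2 chat model; the hub id hfl/chinese-alpaca-2-7b, the Llama-2-style prompt template, and the generation settings are assumptions for illustration, not taken from this page.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hfl/chinese-alpaca-2-7b"  # assumed Hugging Face Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
    device_map="auto",          # requires the accelerate package
)

# Llama-2 chat-style template (assumed); the prompt asks, in Chinese,
# "Hello, please introduce yourself."
prompt = "[INST] 你好，请介绍一下你自己。 [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```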