airllm and Chinese-LLaMA-Alpaca

These are complements. AirLLM provides memory-efficient inference techniques (quantization and layer-by-layer offloading) that could help deploy Chinese-LLaMA-Alpaca models on resource-constrained hardware, while Chinese-LLaMA-Alpaca provides Chinese-adapted model weights and training procedures that AirLLM's methods could serve at inference time.
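To make the offloading idea concrete, here is a minimal sketch of layer-by-layer inference, the technique AirLLM uses to run large models on small GPUs: only one layer's weights are resident at a time. This is illustrative pure Python, not AirLLM's actual API; `load_layer`, the toy checkpoint, and the per-layer math are all invented for the example.

```python
# Illustrative sketch (NOT AirLLM's real API): run a forward pass while
# keeping only one layer's weights in memory at a time, bounding peak
# memory at roughly one layer instead of the whole model.

def load_layer(checkpoint, idx):
    """Hypothetical helper: read one layer's weights from storage."""
    return checkpoint[idx]

def layered_forward(checkpoint, num_layers, hidden):
    for idx in range(num_layers):
        weights = load_layer(checkpoint, idx)   # load only this layer
        # Stand-in for the layer's computation (a real layer would do
        # attention + MLP; here a scalar affine transform per element).
        hidden = [h * weights["scale"] + weights["bias"] for h in hidden]
        del weights                             # free before the next load
    return hidden

# Toy two-layer "checkpoint" standing in for on-disk shards.
checkpoint = {0: {"scale": 2.0, "bias": 1.0}, 1: {"scale": 0.5, "bias": 0.0}}
out = layered_forward(checkpoint, num_layers=2, hidden=[1.0, 2.0])
print(out)  # [1.5, 2.5]
```

The design point is the memory/throughput trade: each layer is re-read from storage every pass, so latency rises, but peak memory drops from O(model) to O(one layer), which is how a 70B model can fit alongside a 4GB GPU.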

airllm — score 83 (Verified)
  Maintenance 16/25 · Adoption 22/25 · Maturity 25/25 · Community 20/25
  Stars: 13,828 · Forks: 1,368 · Downloads: 14,143 · Commits (30d): 1
  Language: Jupyter Notebook · License: Apache-2.0
  No risk flags

Chinese-LLaMA-Alpaca — score 48 (Emerging)
  Maintenance 2/25 · Adoption 10/25 · Maturity 16/25 · Community 20/25
  Stars: 18,970 · Forks: 1,868 · Downloads: — · Commits (30d): 0
  Language: Python · License: Apache-2.0
  Risk flags: Stale 6m, No Package, No Dependents

About airllm

lyogavin/airllm

AirLLM 70B inference with single 4GB GPU

About Chinese-LLaMA-Alpaca

ymcui/Chinese-LLaMA-Alpaca

Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)

The project extends LLaMA's tokenizer with a dedicated Chinese vocabulary and continues pretraining on a Chinese corpus to improve semantic understanding, while the Alpaca variants are instruction-tuned for dialogue tasks. It integrates with major frameworks (transformers, llama.cpp, LangChain, text-generation-webui) and includes quantization pipelines for efficient inference on consumer-grade CPUs and GPUs. Open-source training scripts and model variants (7B/13B/33B) are provided, with specialized Plus and Pro editions tuned for response quality and length.
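The vocabulary-extension step above can be sketched as follows. This is an illustrative toy, not the repo's actual merge script: the point is that new Chinese tokens are appended with fresh IDs, so the base model's existing token IDs (and their embeddings) remain valid and only the new rows need initialization.

```python
# Illustrative sketch of vocabulary extension: append new Chinese tokens
# to a base vocabulary without disturbing existing token IDs.
# (Toy tokens; the real repo merges a trained SentencePiece model.)

def extend_vocab(base_vocab, new_tokens):
    vocab = dict(base_vocab)               # original IDs stay untouched
    next_id = max(vocab.values()) + 1
    for tok in new_tokens:
        if tok not in vocab:               # skip tokens the base already has
            vocab[tok] = next_id
            next_id += 1
    return vocab

base = {"<s>": 0, "hello": 1, "world": 2}
merged = extend_vocab(base, ["你好", "世界", "hello"])
print(merged)  # {'<s>': 0, 'hello': 1, 'world': 2, '你好': 3, '世界': 4}
```

After merging, the model's embedding matrix is resized to the new vocabulary size and continued pretraining on Chinese text teaches the new rows, which is the procedure the paragraph above describes.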

Scores updated daily from GitHub, PyPI, and npm data.