airllm and Chinese-LLaMA-Alpaca-2
These projects are complements rather than competitors: AirLLM's memory-efficient, layer-by-layer inference technique could enable running the large Chinese-LLaMA-Alpaca-2 models on resource-constrained hardware, making them accessible despite their size.
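AirLLM's core idea is to shard a model into per-layer weight files and stream them through the GPU one layer at a time, so peak memory is roughly one layer's weights rather than the whole model's. A toy, pure-Python sketch of that idea follows; it is not AirLLM's actual implementation, and the JSON-per-layer format and `layered_inference` helper are illustrative stand-ins:

```python
import json
import os
import tempfile

def save_layers(layers, directory):
    # Persist each layer's weight matrix to its own file on disk,
    # mimicking AirLLM's per-layer model sharding.
    for i, w in enumerate(layers):
        with open(os.path.join(directory, f"layer_{i}.json"), "w") as f:
            json.dump(w, f)

def matvec(w, x):
    # Plain matrix-vector product standing in for a transformer layer.
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def layered_inference(directory, n_layers, x):
    # Load ONE layer at a time: peak memory is a single layer's
    # weights, which is the principle behind AirLLM's small-GPU claim.
    for i in range(n_layers):
        with open(os.path.join(directory, f"layer_{i}.json")) as f:
            w = json.load(f)  # only this layer is resident in memory
        x = matvec(w, x)
        del w  # release this layer before loading the next
    return x

# Two toy 2x2 "layers" standing in for transformer blocks.
layers = [[[2, 0], [0, 2]], [[1, 1], [1, -1]]]
with tempfile.TemporaryDirectory() as d:
    save_layers(layers, d)
    print(layered_inference(d, len(layers), [1.0, 3.0]))  # → [8.0, -4.0]
```

The real library applies the same load-compute-release loop to transformer layers stored as per-layer shards; the trade-off is disk I/O per layer in exchange for a much smaller GPU-memory footprint.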
Scores
                 airllm    Chinese-LLaMA-Alpaca-2
Maintenance      16/25     2/25
Adoption         22/25     10/25
Maturity         25/25     16/25
Community        20/25     19/25
Stats
                 airllm             Chinese-LLaMA-Alpaca-2
Stars            13,828             7,163
Forks            1,368              568
Downloads        14,143             —
Commits (30d)    1                  0
Language         Jupyter Notebook   Python
License          Apache-2.0         Apache-2.0

Risk flags
airllm: none
Chinese-LLaMA-Alpaca-2: stale for 6 months; no package published; no known dependents
About airllm
lyogavin/airllm
AirLLM: 70B-parameter model inference on a single 4GB GPU
About Chinese-LLaMA-Alpaca-2
ymcui/Chinese-LLaMA-Alpaca-2
Phase 2 of the Chinese LLaMA & Alpaca large-model project: Chinese LLaMA-2 & Alpaca-2 LLMs, including 64K long-context models
Scores updated daily from GitHub, PyPI, and npm data.