ChatGLM-Tuning and ChatGLM-Finetuning
These are competing projects offering overlapping fine-tuning solutions for ChatGLM models, with ChatGLM-Finetuning providing broader method coverage (Freeze, LoRA, P-tuning, full-parameter tuning) compared to ChatGLM-Tuning's LoRA-focused approach.
Scores
ChatGLM-Tuning: Maintenance 0/25, Adoption 10/25, Maturity 16/25, Community 21/25
ChatGLM-Finetuning: Maintenance 0/25, Adoption 10/25, Maturity 8/25, Community 21/25
Stats
ChatGLM-Tuning: Stars 3,758, Forks 440, Downloads —, Commits (30d) 0, Language Python, License MIT
ChatGLM-Finetuning: Stars 2,782, Forks 312, Downloads —, Commits (30d) 0, Language Python, License —
Flags: both repos are marked Stale 6m, No Package, and No Dependents; ChatGLM-Finetuning is additionally marked No License.
About ChatGLM-Tuning
mymusise/ChatGLM-Tuning
A fine-tuning solution based on ChatGLM-6B + LoRA.
About ChatGLM-Finetuning
liucongg/ChatGLM-Finetuning
Fine-tuning of the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models for specific downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and other methods.
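Both projects center on LoRA, which freezes the pretrained weight matrix W and learns only a low-rank update ΔW = (α/r)·B·A, so far fewer parameters train than in full fine-tuning. The following is a minimal NumPy sketch of that idea only, not code from either repository; all dimensions and variable names here are illustrative assumptions:

```python
import numpy as np

# LoRA idea: keep the frozen base weight W, learn a low-rank update
# delta_W = (alpha / r) * B @ A, where rank r << min(d_out, d_in).
d_out, d_in, r, alpha = 8, 16, 2, 4        # illustrative sizes, not ChatGLM's

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init: delta_W starts at 0

def lora_forward(x):
    # Base path plus the scaled low-rank path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B zero-initialized, the adapted model initially matches the base model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable-parameter comparison: LoRA trains r*(d_in + d_out) values,
# full fine-tuning trains d_out*d_in.
print("LoRA params:", r * (d_in + d_out), "vs full:", d_out * d_in)
```

The zero initialization of B is what makes LoRA safe to bolt onto a pretrained model: training starts from exactly the base model's behavior, and only the small A and B matrices receive gradients.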
Scores updated daily from GitHub, PyPI, and npm data.