THUDM/P-tuning
A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too".
Introduces P-tuning: learnable continuous prompt embeddings inserted into the input layer, enabling efficient fine-tuning with far fewer trainable parameters than full model adaptation. Demonstrates effectiveness on knowledge probing (LAMA) and few-shot NLU tasks (SuperGLUE); P-Tuning v2 extends the approach by applying prompts at deeper layers rather than only at the input. Compatible with large models such as GLM-130B, supporting practical deployment on consumer GPUs while remaining competitive with full fine-tuning.
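The core idea above can be sketched in a few lines: a small matrix of continuous prompt vectors is prepended to the token embeddings at the input layer, and only those prompt parameters are trained while the base model stays frozen. This is a minimal numpy illustration of the shape mechanics; the dimensions, names, and `embed_with_prompt` helper are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, hidden = 100, 16      # toy dimensions for illustration
num_prompt_tokens = 4             # length of the learnable continuous prompt

# Frozen token embedding table (stands in for the pretrained model's embeddings).
token_embedding = rng.normal(size=(vocab_size, hidden))
# Trainable prompt embeddings: the only parameters prompt tuning updates.
prompt_embedding = rng.normal(size=(num_prompt_tokens, hidden))

def embed_with_prompt(input_ids):
    """Prepend the continuous prompt to the embedded input tokens."""
    token_vecs = token_embedding[input_ids]                      # (seq_len, hidden)
    return np.concatenate([prompt_embedding, token_vecs], axis=0)

x = embed_with_prompt(np.array([5, 17, 42]))
print(x.shape)  # (num_prompt_tokens + seq_len, hidden) -> (7, 16)
```

During training, gradients would flow only into `prompt_embedding` (a few hundred values here), which is why the method is so much cheaper than updating the full model.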
938 stars. No commits in the last 6 months.
Stars
938
Forks
114
Language
Python
License
MIT
Category
NLP
Last pushed
Oct 06, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/THUDM/P-tuning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
debjitpaul/refiner
The corresponding code from our paper "REFINER: Reasoning Feedback on Intermediate...
ZixuanKe/PyContinual
PyContinual (An Easy and Extendible Framework for Continual Learning)
arazd/ProgressivePrompts
Progressive Prompts: Continual Learning for Language Models
zjunlp/ContinueMKGC
[IJCAI 2024] Continual Multimodal Knowledge Graph Construction
SALT-NLP/IDBR
Codes for the paper: "Continual Learning for Text Classification with Information...