THUDM/P-tuning

A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too".

Score: 46 / 100 (Emerging)

Introduces prompt tuning via learnable continuous embeddings inserted into the input layer, enabling efficient fine-tuning with minimal parameters compared to full model adaptation. Demonstrates effectiveness on knowledge probing (LAMA) and few-shot NLU tasks (SuperGLUE), with v2 extending the approach to deeper prompt positions. Compatible with large models like GLM-130B, supporting practical deployment on consumer GPUs while maintaining competitive performance against full fine-tuning.
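The core idea summarized above can be sketched in a few lines of PyTorch: a small block of learnable continuous embeddings is prepended to the (frozen) input embeddings, so only the prompt parameters are trained. This is a minimal illustration under assumed sizes, not the repository's actual implementation; the `PromptTuningWrapper` name and dimensions are invented for this sketch.

```python
import torch
import torch.nn as nn

class PromptTuningWrapper(nn.Module):
    """Sketch of the input-layer prompt-tuning idea: learnable continuous
    prompt embeddings prepended to token embeddings, backbone frozen."""

    def __init__(self, embed: nn.Embedding, num_prompt_tokens: int = 4):
        super().__init__()
        self.embed = embed
        for p in self.embed.parameters():
            p.requires_grad = False  # freeze the backbone embedding table
        dim = embed.embedding_dim
        # The only trainable parameters: num_prompt_tokens continuous vectors.
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, dim) * 0.02)

    def forward(self, input_ids: torch.LongTensor) -> torch.Tensor:
        tok = self.embed(input_ids)                              # (B, T, D)
        batch = input_ids.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)  # (B, P, D)
        return torch.cat([prompt, tok], dim=1)                   # (B, P+T, D)

# Toy usage with a dummy 100-token vocabulary and 16-dim embeddings.
wrapper = PromptTuningWrapper(nn.Embedding(100, 16), num_prompt_tokens=4)
out = wrapper(torch.randint(0, 100, (2, 5)))
print(out.shape)  # torch.Size([2, 9, 16]) — 4 prompt tokens + 5 input tokens
trainable = sum(p.numel() for p in wrapper.parameters() if p.requires_grad)
print(trainable)  # 64 = 4 prompt tokens x 16 dims; the embedding table is frozen
```

This is why the parameter count stays minimal: only `num_prompt_tokens × dim` values are updated, regardless of backbone size. (P-Tuning v2 extends this by inserting prompts at deeper layers as well.)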

938 stars. No commits in the last 6 months.

Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25

How are scores calculated?
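The overall score appears to be the sum of the four subscores shown above, each out of 25. A quick check of the arithmetic (using the values from this page):

```python
# Subscores from this page, each out of 25; the overall score is out of 100.
subscores = {"Maintenance": 0, "Adoption": 10, "Maturity": 16, "Community": 20}
total = sum(subscores.values())
print(total)  # 46, matching the overall score shown above
```

The exact weighting formula is not documented on this page; this only verifies that the displayed subscores sum to the displayed total.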

Stars: 938
Forks: 114
Language: Python
License: MIT
Last pushed: Oct 06, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/THUDM/P-tuning"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
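The endpoint above follows a `category/owner/repo` path pattern. A small helper that builds such URLs is sketched below; the `quality_url` function is hypothetical, and the `nlp` category segment is taken from this page and may differ for other repositories.

```python
from urllib.parse import quote

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL shown on this page from its path components
    (hypothetical helper; path scheme inferred from the example above)."""
    base = "https://pt-edge.onrender.com/api/v1/quality"
    return f"{base}/{quote(category)}/{quote(owner)}/{quote(repo)}"

print(quality_url("nlp", "THUDM", "P-tuning"))
# https://pt-edge.onrender.com/api/v1/quality/nlp/THUDM/P-tuning
```

The response schema is not documented on this page, so inspect the JSON payload before relying on specific fields.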