THUDM/P-tuning-v2

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks

Quality score: 39 / 100 (Emerging)

Implements deep prompt tuning by injecting continuous learnable prompts at every transformer layer rather than just the input, significantly reducing trainable parameters while maintaining fine-tuning-level performance. Supports diverse NLP tasks including text classification (SuperGLUE), sequence tagging (NER, SRL), and reading comprehension (SQuAD) across BERT and RoBERTa models. Compatible with Hugging Face Datasets API for streamlined data loading and frozen backbone architectures for parameter-efficient adaptation.
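The core idea, injecting trainable prompts at every layer while the backbone stays frozen, can be sketched with a toy PyTorch attention stack. This is an illustration, not the repository's actual code: the `DeepPromptAttention` class, layer sizes, and prompt length are all made up for the example. Each layer owns learnable prefix key/value vectors that are concatenated to the token keys/values, and only those prompt parameters receive gradients.

```python
import torch
import torch.nn as nn

class DeepPromptAttention(nn.Module):
    """One self-attention layer with per-layer learnable prefix K/V prompts.

    Illustrative sketch of deep prompt tuning: the prompts below stand in
    for the continuous prompts P-tuning v2 injects at every layer.
    """
    def __init__(self, d_model=64, n_heads=4, prompt_len=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Continuous prompts for this layer; these are the trainable parameters.
        self.prompt_k = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        self.prompt_v = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, x):
        b = x.size(0)
        pk = self.prompt_k.unsqueeze(0).expand(b, -1, -1)
        pv = self.prompt_v.unsqueeze(0).expand(b, -1, -1)
        # Queries come from tokens only; keys/values are prefixed with prompts,
        # so the output sequence length is unchanged.
        out, _ = self.attn(x, torch.cat([pk, x], dim=1), torch.cat([pv, x], dim=1))
        return out

layers = nn.ModuleList(DeepPromptAttention() for _ in range(4))
# Freeze the backbone; only the per-layer prompts remain trainable.
for layer in layers:
    for name, p in layer.named_parameters():
        p.requires_grad = name.startswith("prompt")

x = torch.randn(2, 10, 64)          # (batch, seq_len, d_model)
for layer in layers:
    x = layer(x)

trainable = sum(p.numel() for l in layers for p in l.parameters() if p.requires_grad)
total = sum(p.numel() for l in layers for p in l.parameters())
```

In this toy setup the trainable prompts are a small fraction of the total parameter count (4 layers × 2 × 8 × 64 = 4,096 values), which mirrors why the method is parameter-efficient at real model scale.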

2,077 stars. No commits in the last 6 months.

Status: stale (no commits in 6 months), no published package, no known dependents.

Score breakdown:
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 9 / 25
Community: 20 / 25


Stars: 2,077
Forks: 207
Language: Python
License: Apache-2.0
Last pushed: Nov 16, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/THUDM/P-tuning-v2"

The API is open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.