THUDM/P-tuning-v2
An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks
Implements deep prompt tuning by injecting continuous learnable prompts at every transformer layer rather than only at the input, which keeps the trainable parameter count small while matching fine-tuning performance. Supports diverse NLP tasks including text classification (SuperGLUE), sequence tagging (NER, SRL), and reading comprehension (SQuAD) across BERT and RoBERTa models. Integrates with the Hugging Face Datasets API for streamlined data loading, and keeps the backbone model frozen so that only the prompt parameters are trained.
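The core idea can be illustrated with a toy, framework-free sketch (this is an illustration of the deep prompt tuning mechanism, not the repository's actual implementation): each layer gets its own learnable prefix of key/value vectors that is prepended to the attention keys and values, while all backbone weights stay frozen.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ToyPrefixLayer:
    """One frozen self-attention layer with a per-layer learnable prefix,
    mimicking deep prompt tuning (hypothetical minimal example)."""
    def __init__(self, d, prefix_len, rng):
        # Backbone projections: frozen, never updated during tuning.
        self.Wq = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wk = rng.standard_normal((d, d)) / np.sqrt(d)
        self.Wv = rng.standard_normal((d, d)) / np.sqrt(d)
        # The only trainable parameters: this layer's prefix keys/values.
        self.prefix_k = rng.standard_normal((prefix_len, d)) * 0.02
        self.prefix_v = rng.standard_normal((prefix_len, d)) * 0.02

    def forward(self, x):
        q = x @ self.Wq
        # Prepend the learnable prefix to keys and values at THIS layer,
        # rather than only prepending prompt tokens at the input.
        k = np.concatenate([self.prefix_k, x @ self.Wk], axis=0)
        v = np.concatenate([self.prefix_v, x @ self.Wv], axis=0)
        att = softmax(q @ k.T / np.sqrt(x.shape[-1]))
        return x + att @ v  # residual connection

rng = np.random.default_rng(0)
d, prefix_len, n_layers, seq_len = 16, 4, 3, 5
layers = [ToyPrefixLayer(d, prefix_len, rng) for _ in range(n_layers)]

x = rng.standard_normal((seq_len, d))
for layer in layers:
    x = layer.forward(x)

trainable = sum(l.prefix_k.size + l.prefix_v.size for l in layers)
frozen = sum(l.Wq.size + l.Wk.size + l.Wv.size for l in layers)
print(x.shape, trainable, frozen)  # prefixes are a small fraction of the model
```

Even in this toy setting the trainable prefixes (384 values) are a small fraction of the frozen backbone (2,304 values); at BERT/RoBERTa scale the gap is several orders of magnitude, which is what makes the approach parameter-efficient.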
2,077 stars. No commits in the last 6 months.
Stars
2,077
Forks
207
Language
Python
License
Apache-2.0
Category
prompt-engineering
Last pushed
Nov 16, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/THUDM/P-tuning-v2"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
ucinlp/autoprompt
AutoPrompt: Automatic Prompt Construction for Masked Language Models.
zjunlp/KnowPrompt
[WWW 2022] KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation...
zjunlp/PromptKG
PromptKG Family: a Gallery of Prompt Learning & KG-related research works, toolkits, and paper-list.
VE-FORBRYDERNE/mtj-softtuner
Create soft prompts for fairseq 13B dense, GPT-J-6B and GPT-Neo-2.7B for free in a Google Colab...
princeton-nlp/OptiPrompt
[NAACL 2021] Factual Probing Is [MASK]: Learning vs. Learning to Recall https://arxiv.org/abs/2104.05240