Allen0307/AdapterBias

Code for the Findings of NAACL 2022 (Long Paper): AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks

Quality score: 14 / 100 (Experimental)

This project helps machine learning engineers and researchers fine-tune large language models for specific NLP tasks more efficiently. By adding a small, token-dependent representation shift to the outputs of a frozen pre-trained model, it adapts the model to new datasets with significantly fewer trainable parameters. You provide a pre-trained language model and a dataset for an NLP task; the output is a model fine-tuned for that task.
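The paper describes the shift as a learned vector v combined with a linear layer L_α that assigns each token a scalar weight, so every token receives its own scaled copy of v. A minimal PyTorch sketch of that idea (the class name, zero initialization, and exact insertion point are assumptions; consult the repository for the authors' actual module):

import torch
import torch.nn as nn

class AdapterBiasSketch(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # v: one trainable shift vector shared across all tokens (assumed init: zeros)
        self.v = nn.Parameter(torch.zeros(hidden_size))
        # L_alpha: maps each token representation to a single scalar weight
        self.alpha = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        weights = self.alpha(hidden_states)  # (batch, seq_len, 1)
        # broadcast: each token adds its own weighted copy of v
        return hidden_states + weights * self.v

Only v and the weights of the alpha layer are trained, which is where the parameter savings come from.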

No commits in the last 6 months.

Use this if you are working with large pre-trained language models and need to adapt them to various downstream NLP tasks (like sentiment analysis, question answering, or text entailment) while minimizing computational resources and memory.

Not ideal if you are not working with Transformer-based language models or if you prioritize maximum model performance over parameter efficiency.
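If you do use it, the usual adapter recipe is to freeze the backbone and train only the adapter parameters. A sketch of that setup with Hugging Face transformers (the model choice and per-layer attachment are assumptions, wiring the modules into the forward pass is omitted, and AdapterBiasSketch refers to the sketch above):

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# Freeze the pre-trained backbone; only adapter parameters will train.
for param in model.parameters():
    param.requires_grad = False

# One shift module per transformer layer (the paper applies the shift
# inside each transformer layer).
adapters = [AdapterBiasSketch(model.config.hidden_size)
            for _ in range(model.config.num_hidden_layers)]

trainable = sum(p.numel() for a in adapters for p in a.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{trainable:,} trainable adapter parameters vs {total:,} frozen")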

natural-language-processing large-language-models model-adaptation transfer-learning resource-efficient-ml
No License · Stale (6m) · No Package · No Dependents

Maintenance: 0 / 25
Adoption: 6 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 18
Forks: n/a
Language: Python
License: None
Last pushed: May 04, 2022
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Allen0307/AdapterBias"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
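The same endpoint can be called from Python; a minimal sketch using requests (the shape of the returned JSON is not documented here, so it is simply printed):

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/Allen0307/AdapterBias"
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on rate limits or errors
print(response.json())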