adapters and efficient-task-transfer

The first is a comprehensive, production-ready library for implementing parameter-efficient adapters across multiple model architectures; the second is a research codebase that uses adapter-based methods to solve the upstream problem of selecting optimal intermediate tasks for pretraining. The two are complementary: the research code could build on, or inform usage of, the adapter library.

                 adapters              efficient-task-transfer
Overall score    82 (Verified)         26 (Experimental)
Maintenance      13/25                 0/25
Adoption         22/25                 7/25
Maturity         25/25                 9/25
Community        22/25                 10/25
Stars            2,802                 37
Forks            375                   4
Downloads        86,888                —
Commits (30d)    1                     0
Language         Python                Python
License          Apache-2.0            MIT
Risk flags       None                  Stale 6m, No Package, No Dependents

About adapters

adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ HuggingFace Transformer models via a unified API. Supports advanced composition patterns like adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library with minimal code changes needed for both training and inference.

About efficient-task-transfer

adapter-hub/efficient-task-transfer

Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021

Scores updated daily from GitHub, PyPI, and npm data.