adapters and efficient-task-transfer
adapters is a comprehensive, production-ready library for parameter-efficient fine-tuning across multiple model architectures. efficient-task-transfer is a research codebase that uses adapter-based methods to tackle the upstream problem of selecting the best intermediate task to pre-train on. The two are complementary: the research code could build on, and inform usage of, the adapter library.
About adapters
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ HuggingFace Transformer models via a unified API. Supports advanced composition patterns like adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library with minimal code changes needed for both training and inference.
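Two of the methods named above can be sketched in a few lines of NumPy. This is a conceptual illustration of the math, not the library's actual API: a bottleneck adapter down-projects the hidden state, applies a nonlinearity, up-projects, and adds a residual; LoRA adds a low-rank correction to a frozen weight. All names (bottleneck_adapter, lora_delta, the matrices a, b) are illustrative, not identifiers from the adapters package.

```python
import numpy as np

def bottleneck_adapter(h, w_down, w_up):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    then add the result back to the input via a residual connection."""
    z = np.maximum(0.0, h @ w_down)   # ReLU inside the low-dim bottleneck
    return h + z @ w_up

def lora_delta(x, w_frozen, a, b, alpha=1.0):
    """LoRA: the frozen weight's output is corrected by a scaled
    low-rank update (a @ b), where a and b are small trainable matrices."""
    return x @ w_frozen + alpha * (x @ a @ b)

rng = np.random.default_rng(0)
d, r = 16, 4                          # hidden size, bottleneck dim / LoRA rank
h = rng.standard_normal((2, d))

# With the up-projection initialized to zero, the adapter starts out
# as an exact identity function, so training begins from the base model.
out = bottleneck_adapter(h, rng.standard_normal((d, r)) * 0.01, np.zeros((r, d)))
print(np.allclose(out, h))            # True: zero up-projection, pure residual
```

Zero-initializing the up-projection (or LoRA's b matrix) is the standard trick that lets these methods be inserted into a pretrained model without perturbing its initial behavior.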
About efficient-task-transfer
adapter-hub/efficient-task-transfer
Research code for "What to Pre-Train on? Efficient Intermediate Task Selection", EMNLP 2021