adapters vs. adaptor

Both libraries address parameter-efficient adaptation of language models with overlapping approaches. adapter-hub/adapters is the more mature and widely adopted of the two, a unified framework covering many adaptation methods, while gaussalgo/adaptor focuses on fine-tuning a model to a specific task, domain, or custom training objective.

                 adapters               adaptor
Overall score    82 (Verified)          42 (Emerging)
Maintenance      13/25                  0/25
Adoption         22/25                  12/25
Maturity         25/25                  18/25
Community        22/25                  12/25
Stars            2,802                  28
Forks            375                    4
Downloads        86,888                 157
Commits (30d)    1                      0
Language         Python                 Jupyter Notebook
License          Apache-2.0             MIT
Risk flags       None                   Stale 6m

About adapters

adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ Hugging Face Transformers model architectures via a unified API. Supports advanced composition patterns such as adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library, requiring minimal code changes for both training and inference, as in the sketch below.
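
The drop-in pattern looks roughly like this. A minimal sketch assuming the adapters package is installed alongside transformers; the checkpoint and adapter names (roberta-base, task_lora, domain_bn, task_bn) are illustrative, not prescribed by the library:

```python
import adapters
import adapters.composition as ac
from transformers import AutoModelForSequenceClassification

# Load a stock Transformers model, then retrofit adapter support onto it.
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
adapters.init(model)

# Attach a LoRA adapter; train_adapter() freezes the base model so only
# the adapter weights receive gradients during training.
model.add_adapter("task_lora", config=adapters.LoRAConfig(r=8, alpha=16))
model.train_adapter("task_lora")

# Composition example: stack two bottleneck adapters (e.g., a domain
# adapter under a task adapter) and make the stack active for inference.
model.add_adapter("domain_bn", config=adapters.SeqBnConfig())
model.add_adapter("task_bn", config=adapters.SeqBnConfig())
model.set_active_adapters(ac.Stack("domain_bn", "task_bn"))
```

The same model object still works with the standard Transformers training and inference utilities, which is what makes the extension "drop-in".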

About adaptor

gaussalgo/adaptor

ACL 2022: Adaptor: a library to easily adapt a language model to your own task, domain, or custom objective(s).
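
Adaptor's design is objective-centric: rather than subclassing a trainer, you declare one or more training objectives over a shared language module and hand them to a schedule. A hedged sketch of that workflow follows; the module paths, class names, and arguments are assumptions based on the project's README and may differ by version, and the checkpoint, file paths, and hyperparameters are placeholders:

```python
# Sketch of adaptor's objective-centric workflow. Module paths and
# argument names are assumptions taken from the project's README;
# verify against the current repo before use.
from adaptor.adapter import Adapter
from adaptor.lang_module import LangModule
from adaptor.objectives.seq2seq import Sequence2Sequence
from adaptor.schedules import ParallelSchedule
from adaptor.utils import AdaptationArguments, StoppingStrategy

# Wrap a Hugging Face checkpoint so objectives can share its weights.
lang_module = LangModule("Helsinki-NLP/opus-mt-en-de")

# Declare what to train on: here, a single sequence-to-sequence objective.
objective = Sequence2Sequence(
    lang_module,
    texts_or_path="train.src",   # source sentences (placeholder path)
    labels_or_path="train.tgt",  # target sentences (placeholder path)
    batch_size=8,
)

# AdaptationArguments extends transformers.TrainingArguments with
# objective-level stopping criteria.
args = AdaptationArguments(
    output_dir="adaptation_output",
    stopping_strategy=StoppingStrategy.ALL_OBJECTIVES_CONVERGED,
    num_train_epochs=3,
)

# A schedule decides how objectives are interleaved across training steps;
# Adapter runs the training loop over the schedule.
schedule = ParallelSchedule(objectives=[objective], args=args)
Adapter(lang_module, schedule, args=args).train()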

Scores updated daily from GitHub, PyPI, and npm data.