adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Score: 82 / 100 (Verified)

Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ HuggingFace Transformer models via a unified API. Supports advanced composition patterns like adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library with minimal code changes needed for both training and inference.
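As an illustration of that drop-in usage, below is a minimal sketch based on the library's documented AutoAdapterModel workflow; the checkpoint, adapter name, and label count are placeholder choices, and exact config strings may vary by version:

from adapters import AutoAdapterModel

# Load a Transformers checkpoint with adapter support enabled
model = AutoAdapterModel.from_pretrained("roberta-base")

# Add a LoRA adapter plus a task head, then freeze the base model
# so that only the adapter (and head) weights are trained
model.add_adapter("my_task", config="lora")
model.add_classification_head("my_task", num_labels=2)
model.train_adapter("my_task")

# At inference time, activate the trained adapter
model.set_active_adapters("my_task")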

2,802 stars and 86,888 monthly downloads. Used as a dependency by 2 other packages. Maintained, with 1 commit in the last 30 days. Available on PyPI.

Maintenance: 13 / 25
Adoption: 22 / 25
Maturity: 25 / 25
Community: 22 / 25


Stars: 2,802
Forks: 375
Language: Python
License: Apache-2.0
Last pushed: Mar 01, 2026
Monthly downloads: 86,888
Commits (30d): 1
Dependencies: 2
Reverse dependents: 2

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/adapter-hub/adapters"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
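The same endpoint can be queried from a script; a minimal sketch in Python, assuming the endpoint returns JSON (the exact response fields are not specified here):

import requests

# Hypothetical client for the quality endpoint shown above
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/adapter-hub/adapters"
response = requests.get(url)
response.raise_for_status()

# Print the returned quality data; field names depend on the API's schema
print(response.json())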