adapters and awesome-adapter-resources
The first is a production-ready PyTorch library that implements multiple parameter-efficient adapter architectures (LoRA, prefix tuning, etc.), while the second is a curated reference collection documenting adapter methods and research. The two are complementary: practitioners use the library for training and inference, and researchers consult the collection for an overview of the field.
About adapters
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ HuggingFace Transformer models via a unified API. Supports advanced composition patterns like adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library with minimal code changes needed for both training and inference.
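Below is a minimal sketch of how the unified API is typically used to attach a LoRA adapter to an off-the-shelf Transformers model; the checkpoint name, adapter name, and LoRA hyperparameters are illustrative placeholders, not prescriptions from the library:

```python
# Sketch: attaching a LoRA adapter to a HuggingFace model with the adapters library.
# Checkpoint name, adapter name, and hyperparameters are illustrative.
from transformers import AutoModelForSequenceClassification
import adapters
from adapters import LoRAConfig

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

# Enable adapter support on the plain Transformers model (drop-in extension).
adapters.init(model)

# Add a LoRA adapter and mark it for training; base model weights stay frozen.
model.add_adapter("sentiment_lora", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("sentiment_lora")
model.set_active_adapters("sentiment_lora")  # route the forward pass through the adapter

# Training then proceeds as with any Transformers model (e.g. via Trainer or a custom loop).
```

Swapping the config object (e.g. a bottleneck or prefix-tuning config) selects a different adapter method without changing the rest of the workflow, which is the point of the unified API.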
About awesome-adapter-resources
calpt/awesome-adapter-resources
Collection of Tools and Papers related to Adapters / Parameter-Efficient Transfer Learning / Fine-Tuning