adapters vs. VL_adapter
The adapters library is a general-purpose framework for parameter-efficient transfer learning across many Transformer architectures and tasks, while VL-Adapter is a specialized implementation of adapter techniques for one multimodal domain: vision-and-language tasks. The two are complementary; VL-Adapter demonstrates a single application of the approach that the broader adapter-hub ecosystem generalizes.
About adapters
adapter-hub/adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ HuggingFace Transformer models via a unified API. Supports advanced composition patterns like adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library with minimal code changes needed for both training and inference.
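To make the "bottleneck adapter" method concrete, here is a minimal PyTorch sketch of the core idea behind such modules: a small down-projection and up-projection inserted with a residual connection, so only the adapter's few parameters are trained while the base model stays frozen. The class name, sizes, and initialization below are illustrative, not the adapters library's actual API.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter (illustrative sketch): down-project the
    hidden states, apply a nonlinearity, up-project, and add the result
    back to the input. Only these small layers would be trained."""

    def __init__(self, hidden_size: int, bottleneck_size: int):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_size, hidden_size)
        # Zero-init the up-projection so the adapter starts as a no-op
        # and training begins from the frozen model's behavior.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Example: adapter with a 768-dim hidden size and a 48-dim bottleneck.
adapter = BottleneckAdapter(hidden_size=768, bottleneck_size=48)
x = torch.randn(2, 16, 768)  # (batch, seq_len, hidden)
out = adapter(x)
```

Because the up-projection is zero-initialized, the module initially passes its input through unchanged; the actual library wires such modules into existing HuggingFace Transformer layers for you.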
About VL_adapter
ylsung/VL_adapter
PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022)