adapters and VL_adapter

The unified adapters library is a general-purpose framework for parameter-efficient transfer learning, while VL-Adapter is a specialized application of adapter techniques to vision-and-language tasks. The two are complementary: VL-Adapter demonstrates one task-specific use of the adapter methods that the broader adapter-hub ecosystem generalizes.

Metric          adapters          VL_adapter
Score           82 (Verified)     39 (Emerging)
Maintenance     13/25             0/25
Adoption        22/25             10/25
Maturity        25/25             16/25
Community       22/25             13/25
Stars           2,802             210
Forks           375               17
Downloads       86,888            —
Commits (30d)   1                 0
Language        Python            Python
License         Apache-2.0        MIT
Risk flags      None              Stale (6 mo), No Package, No Dependents

About adapters

adapter-hub/adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning

Integrates 10+ parameter-efficient fine-tuning methods (LoRA, prefix tuning, bottleneck adapters, etc.) into 20+ HuggingFace Transformer models via a unified API. Supports advanced composition patterns like adapter merging via task arithmetic and parallel/sequential adapter stacking, plus quantized training variants (Q-LoRA, Q-Bottleneck). Built as a drop-in extension to the Transformers library with minimal code changes needed for both training and inference.
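
The unified API means a handful of calls cover loading a model, registering adapters, training them, and composing them. Below is a minimal sketch following the adapters library's documented API; the `roberta-base` checkpoint and the adapter names are illustrative placeholders, not part of either project in this comparison:

```python
# Illustrative sketch of the adapters workflow; adapter names and the
# checkpoint are placeholders chosen for this example.
from adapters import AutoAdapterModel
import adapters.composition as ac

# Load a HuggingFace checkpoint with adapter support enabled.
model = AutoAdapterModel.from_pretrained("roberta-base")

# Register parameter-efficient modules by config string, e.g. "lora",
# "prefix_tuning", or "seq_bn" (a bottleneck adapter).
model.add_adapter("task_a", config="lora")
model.add_adapter("task_b", config="seq_bn")

# Freeze the base model; only task_a's adapter weights stay trainable.
model.train_adapter("task_a")

# Composition: run the two adapters sequentially (Parallel is also available).
model.set_active_adapters(ac.Stack("task_a", "task_b"))
```

Training then proceeds with a standard PyTorch or Transformers training loop; only the activated adapter's parameters receive gradients, which is what makes the approach a drop-in extension rather than a separate framework.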

About VL_adapter

ylsung/VL_adapter

PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022)

Scores updated daily from GitHub, PyPI, and npm data.