huggingface/peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Overall score: 93 / 100 (Verified)

Supports multiple efficient adaptation techniques including LoRA, QLoRA, soft prompting, and IA3, which train only a small fraction of parameters (often <1%) while maintaining performance comparable to full fine-tuning. Integrates seamlessly with Transformers, Diffusers, and Accelerate for distributed training, quantization, and inference across diverse model architectures and tasks. Enables adapter composition and multi-task learning while producing lightweight checkpoints (typically MBs instead of GBs) that avoid catastrophic forgetting.
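The "often <1%" figure follows directly from the low-rank math behind LoRA: instead of updating a dense d_out × d_in weight W, a rank-r adapter learns two small matrices B (d_out × r) and A (r × d_in) and freezes W. A back-of-the-envelope sketch (plain Python, not the peft API; the 4096 width and rank 8 are illustrative assumptions, not tied to any specific model):

```python
def full_param_count(d_in: int, d_out: int) -> int:
    """Parameters updated by full fine-tuning of one weight matrix."""
    return d_in * d_out

def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on that matrix:
    B is d_out x r and A is r x d_in, so r * (d_in + d_out) total."""
    return r * (d_in + d_out)

# Illustrative numbers: a 4096-wide projection with LoRA rank 8.
d = 4096
full = full_param_count(d, d)        # 16,777,216 parameters
lora = lora_param_count(d, d, r=8)   # 65,536 parameters
print(f"LoRA trains {lora / full:.2%} of the weight's parameters")
# → LoRA trains 0.39% of the weight's parameters
```

The same ratio explains the lightweight checkpoints: only B and A need to be saved per adapter, so checkpoints shrink from gigabytes to megabytes.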

20,777 stars and 10,105,194 monthly downloads. Used by 82 other packages. Actively maintained with 25 commits in the last 30 days. Available on PyPI.

Maintenance: 23 / 25
Adoption: 25 / 25
Maturity: 25 / 25
Community: 20 / 25


Stars: 20,777
Forks: 2,211
Language: Python
License: Apache-2.0
Last pushed: Mar 12, 2026
Monthly downloads: 10,105,194
Commits (30d): 25
Dependencies: 10
Reverse dependents: 82

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/peft"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
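The same endpoint can be called from Python with only the standard library. A minimal sketch, assuming the URL pattern follows the curl example above (`/api/v1/quality/<registry>/<owner>/<repo>`; the helper names here are our own, not part of the service):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, owner: str, repo: str) -> str:
    """Compose the endpoint URL, following the pattern in the curl example."""
    return f"{BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry: str, owner: str, repo: str) -> dict:
    """Fetch the quality report as a dict; requires network access."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "huggingface", "peft"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/peft
```

No authentication header is needed for the free tier; a key (if obtained) would presumably be passed per the service's docs, which are not shown here.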