huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Supports multiple efficient adaptation techniques including LoRA, QLoRA, soft prompting, and IA3, which train only a small fraction of parameters (often <1%) while maintaining performance comparable to full fine-tuning. Integrates seamlessly with Transformers, Diffusers, and Accelerate for distributed training, quantization, and inference across diverse model architectures and tasks. Enables adapter composition and multi-task learning while producing lightweight checkpoints (typically MBs instead of GBs) that avoid catastrophic forgetting.
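The "<1% of parameters" figure follows directly from the LoRA arithmetic: a rank-r update trains r·(d_in + d_out) parameters instead of d_in·d_out. A quick sketch, using assumed dimensions roughly matching a 7B-class transformer (hidden size 4096, 32 layers, four attention projections per layer), not measured values:

```python
# Back-of-the-envelope check of the "<1% trainable parameters" claim for LoRA.
# All dimensions below are illustrative assumptions, not measurements.

def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """LoRA freezes the d_in x d_out weight and trains two low-rank
    factors A (d_in x r) and B (r x d_out): r * (d_in + d_out) params."""
    return r * (d_in + d_out)

d = 4096       # hidden size (assumed)
layers = 32    # number of attention blocks (assumed)
projs = 4      # q/k/v/o projections per block

full = d * d * projs * layers                     # full fine-tuning
lora = lora_param_count(d, d, r=8) * projs * layers  # rank-8 LoRA

print(f"full: {full:,}  lora: {lora:,}  fraction: {lora / full:.2%}")
# fraction comes out at 0.39% -- well under 1%
```

The same ratio explains the checkpoint sizes: only the low-rank factors need to be saved, so adapters land in the megabyte range while the frozen base weights stay in the original gigabyte-scale checkpoint.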
20,777 stars and 10,105,194 monthly downloads. Used by 82 other packages. Actively maintained with 25 commits in the last 30 days. Available on PyPI.
Stars: 20,777
Forks: 2,211
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Monthly downloads: 10,105,194
Commits (30d): 25
Dependencies: 10
Reverse dependents: 82
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/huggingface/peft"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Related projects
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama,...
modelscope/ms-swift
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5,...
linkedin/Liger-Kernel
Efficient Triton Kernels for LLM Training
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)