peft and LLM-Finetuning

PEFT is the foundational library that LLM-Finetuning uses as its core dependency, so the two are complements rather than competitors: LLM-Finetuning is a practical guide and example repository built on top of PEFT.

|                | peft           | LLM-Finetuning                                 |
|----------------|----------------|------------------------------------------------|
| Overall score  | 93 (Verified)  | 45 (Emerging)                                  |
| Maintenance    | 23/25          | 2/25                                           |
| Adoption       | 25/25          | 10/25                                          |
| Maturity       | 25/25          | 8/25                                           |
| Community      | 20/25          | 25/25                                          |
| Stars          | 20,777         | 2,827                                          |
| Forks          | 2,211          | 725                                            |
| Downloads      | 10,105,194     | —                                              |
| Commits (30d)  | 25             | 0                                              |
| Language       | Python         | Jupyter Notebook                               |
| License        | Apache-2.0     | —                                              |
| Risk flags     | None           | No License, Stale 6m, No Package, No Dependents |

About peft

huggingface/peft

🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Supports multiple efficient adaptation techniques including LoRA, QLoRA, soft prompting, and IA3, which train only a small fraction of parameters (often <1%) while maintaining performance comparable to full fine-tuning. Integrates seamlessly with Transformers, Diffusers, and Accelerate for distributed training, quantization, and inference across diverse model architectures and tasks. Enables adapter composition and multi-task learning while producing lightweight checkpoints (typically MBs instead of GBs) that avoid catastrophic forgetting.
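The "small fraction of parameters" claim is easy to verify with back-of-the-envelope arithmetic: a LoRA adapter freezes a d_out × d_in weight matrix and learns only two low-rank factors, so the trainable count drops from d_in · d_out to r · (d_in + d_out). A minimal sketch of that calculation (the 4096×4096 projection size and rank r=8 are illustrative assumptions, not values taken from either repository):

```python
# Back-of-the-envelope check of LoRA's parameter efficiency.
# For a frozen weight matrix W of shape (d_out, d_in), LoRA trains
# two factors A (r x d_in) and B (d_out x r), so the trainable
# parameter count is r * (d_in + d_out) instead of d_in * d_out.

def lora_trainable_fraction(d_in: int, d_out: int, r: int) -> float:
    """Fraction of the full matrix's parameters that LoRA trains."""
    full = d_in * d_out          # parameters in the frozen matrix
    lora = r * (d_in + d_out)    # parameters in the low-rank factors
    return lora / full

# Hypothetical 4096x4096 attention projection with rank r = 8:
frac = lora_trainable_fraction(4096, 4096, 8)
print(f"{frac:.4%}")  # → 0.3906% (well under 1%)
```

The same arithmetic explains the lightweight checkpoints: only the low-rank factors need to be saved, which is why adapter files are typically megabytes rather than the gigabytes a full fine-tuned model would occupy.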

About LLM-Finetuning

ashishpatel26/LLM-Finetuning

LLM Finetuning with peft

Scores updated daily from GitHub, PyPI, and npm data.