peft and LLM-Finetuning
PEFT is the foundational library that LLM-Finetuning uses as its core dependency, so the two are complements rather than competitors: the latter is a practical guide and example repository built on top of the former.
About peft
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Supports multiple efficient adaptation techniques including LoRA, QLoRA, soft prompting, and IA3, which train only a small fraction of parameters (often <1%) while maintaining performance comparable to full fine-tuning. Integrates seamlessly with Transformers, Diffusers, and Accelerate for distributed training, quantization, and inference across diverse model architectures and tasks. Enables adapter composition and multi-task learning while producing lightweight checkpoints (typically MBs instead of GBs) that avoid catastrophic forgetting.
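The "often <1%" figure comes from the low-rank structure these adapters exploit. The sketch below is a minimal, library-free illustration of the idea behind LoRA (not the peft API itself): a frozen weight matrix W gets a trainable low-rank update (alpha/r)·B·A, so only A and B contribute trainable parameters. All dimensions and names here are illustrative assumptions.

```python
import numpy as np

def lora_param_counts(d_in: int, d_out: int, r: int):
    """Trainable parameters: full fine-tuning vs. a rank-r LoRA adapter.

    Full fine-tuning updates all d_out * d_in entries of W; LoRA freezes W
    and trains only A (r x d_in) and B (d_out x r).
    """
    full = d_out * d_in
    lora = r * d_in + d_out * r
    return full, lora

def lora_forward(x, W, A, B, alpha: float, r: int):
    """Adapted forward pass: y = x @ (W + (alpha/r) * B @ A).T.

    W stays frozen; the rank-<=r update B @ A is the only trained part.
    """
    delta = (alpha / r) * (B @ A)  # shape (d_out, d_in), rank at most r
    return x @ (W + delta).T

# Example: one 4096x4096 projection (a typical attention weight) with rank 8.
full, lora = lora_param_counts(4096, 4096, r=8)
print(f"full: {full:,}  lora: {lora:,}  fraction: {lora / full:.2%}")
# → full: 16,777,216  lora: 65,536  fraction: 0.39%
```

Because only A and B are saved, a checkpoint holds ~65k values per adapted matrix instead of ~16.8M, which is why adapter checkpoints come out in megabytes rather than gigabytes.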
About LLM-Finetuning
ashishpatel26/LLM-Finetuning
LLM Finetuning with peft