LlamaFactory and LLM-Finetuning
LlamaFactory is a comprehensive fine-tuning framework that abstracts away lower-level training details behind configs and a web UI, while LLM-Finetuning works directly with PEFT (Parameter-Efficient Fine-Tuning) as its core library. They address the same task at different levels of abstraction: a full framework versus hands-on PEFT code.
About LlamaFactory
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations such as Flash Attention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). It provides both a CLI and a Gradio web interface for model training and inference, and integrates with vLLM/SGLang for OpenAI-compatible API deployment.
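To illustrate the config-driven workflow, a LoRA supervised fine-tuning run in LlamaFactory is typically described by a small YAML file. The sketch below is hedged: field names follow the style of the example configs shipped in the repository, and the model, dataset, and output paths are placeholders, not recommendations.

```yaml
# Sketch of a LoRA SFT config (placeholder model/dataset/output values)
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
dataset: alpaca_en_demo
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

A config like this is then passed to the CLI, e.g. `llamafactory-cli train my_config.yaml`.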
About LLM-Finetuning
ashishpatel26/LLM-Finetuning
LLM Finetuning with peft