Unsloth and LlamaFactory
Unsloth optimizes the computational efficiency of fine-tuning, delivering faster training with reduced VRAM usage, while LlamaFactory provides a unified framework and broad model support for configuring and executing fine-tuning jobs. The two are complementary and can work together in a single fine-tuning pipeline.
About Unsloth
unslothai/unsloth
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.
Implements custom Triton kernels and mathematical optimizations for accelerated training across modalities (text, vision, audio, embeddings), while supporting full fine-tuning, reinforcement learning, and multi-bit quantization (4-bit, 16-bit, FP8). Provides both Unsloth Studio (web UI for Windows/Linux/macOS with visual data recipe workflows) and Unsloth Core (Python library integrating with PyTorch/Hugging Face), with built-in support for 500+ models and export to GGUF/safetensors formats.
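To make the library side concrete, here is a minimal sketch of Unsloth Core's Python API for 4-bit QLoRA fine-tuning. It is illustrative only: it requires a CUDA GPU and `pip install unsloth`, and the checkpoint name and LoRA hyperparameters shown are assumptions, not recommendations.

```python
# Minimal Unsloth QLoRA sketch (assumes a CUDA GPU; checkpoint and
# hyperparameters below are illustrative choices, not recommendations).
from unsloth import FastLanguageModel

# Load a pre-quantized 4-bit model via Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # hypothetical choice of checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization to cut VRAM usage
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                # LoRA rank (assumed value)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```

The returned `model` and `tokenizer` then plug into a standard Hugging Face training loop (for example a TRL `SFTTrainer`), which is where Unsloth's kernel-level speedups apply.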
About LlamaFactory
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations like Flash Attention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). Provides both CLI and Gradio web interface for model training and inference, integrating with vLLM/SGLang for OpenAI-compatible API deployment.
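As a concrete illustration of LlamaFactory's configuration-driven workflow, a supervised LoRA fine-tuning run is typically described in a YAML file. The sketch below follows the key names used in the project's example configs; the model, dataset, and hyperparameter values are assumptions for illustration.

```yaml
# Hypothetical LoRA SFT config for LlamaFactory (values are illustrative).
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                 # supervised fine-tuning
do_train: true
finetuning_type: lora
lora_target: all
dataset: alpaca_en_demo    # a bundled demo dataset
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Such a config would be launched from the CLI with `llamafactory-cli train <config>.yaml`, or the same options can be set through the Gradio web interface.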