unsloth and LlamaFactory

Unsloth optimizes the computational efficiency of fine-tuning through faster training and reduced VRAM usage, while LlamaFactory provides a unified framework and broad model support for configuring and executing those fine-tuning jobs. The two are complementary tools that can work together in a fine-tuning pipeline.

                 unsloth          LlamaFactory
Score            94 (Verified)    70 (Verified)
Maintenance      25/25            23/25
Adoption         25/25            10/25
Maturity         25/25            16/25
Community        19/25            21/25
Stars            53,879           68,347
Forks            4,503            8,346
Downloads        1,725,714
Commits (30d)    694              24
Language         Python           Python
License          Apache-2.0       Apache-2.0
Risk flags       None             No Package, No Dependents

About unsloth

unslothai/unsloth

Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.

Implements custom Triton kernels and mathematical optimizations for accelerated training across modalities (text, vision, audio, embeddings), and supports full fine-tuning, reinforcement learning, and multi-bit quantization (4-bit, 16-bit, FP8). Provides both Unsloth Studio (a web UI for Windows/Linux/macOS with visual data-recipe workflows) and Unsloth Core (a Python library integrating with PyTorch and Hugging Face), with built-in support for 500+ models and export to GGUF/safetensors formats.
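The Unsloth Core workflow described above can be sketched in a few lines. This is a minimal illustration, assuming the `unsloth` and `trl` packages are installed and a CUDA GPU is available; the model name and hyperparameters are placeholders, not recommendations:

```python
# Minimal sketch of a 4-bit (QLoRA-style) fine-tune with Unsloth Core.
# Assumes a CUDA GPU; model name and hyperparameters are illustrative.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # a pre-quantized 4-bit checkpoint
    max_seq_length=2048,
    load_in_4bit=True,  # 4-bit quantization is one source of the VRAM savings
)

# Attach LoRA adapters so only a small fraction of the weights are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)

# Training then proceeds with a standard Hugging Face/TRL trainer, e.g.
# trl.SFTTrainer(model=model, tokenizer=tokenizer, train_dataset=...),
# after which the result can be exported to GGUF or safetensors.
```

The key design point is that Unsloth swaps in its optimized kernels transparently: the objects returned above are drop-in compatible with the usual Hugging Face training stack.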

About LlamaFactory

hiyouga/LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations like Flash Attention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). Provides both CLI and Gradio web interface for model training and inference, integrating with vLLM/SGLang for OpenAI-compatible API deployment.
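As a concrete illustration, a LlamaFactory training job is typically described by a small YAML config and launched with the CLI (`llamafactory-cli train <config>.yaml`). The file below is a hypothetical sketch of a supervised LoRA fine-tune; every value is a placeholder, not a recommended setting:

```yaml
# hypothetical sft_lora.yaml, run via: llamafactory-cli train sft_lora.yaml
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                 # supervised fine-tuning (alternatives: rm, ppo, dpo, kto)
do_train: true
finetuning_type: lora      # parameter-efficient fine-tuning
dataset: alpaca_en_demo    # a demo dataset shipped with LlamaFactory
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Swapping `stage` (and the corresponding dataset format) is how the same config-driven workflow covers reward modeling and the RL methods (PPO, DPO, KTO, ORPO) mentioned above.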

Scores updated daily from GitHub, PyPI, and npm data.