hiyouga/LlamaFactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Score: 70 / 100 · Verified

Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations such as FlashAttention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). Provides both a CLI and a Gradio web interface for model training and inference, and integrates with vLLM/SGLang for OpenAI-compatible API deployment.

68,347 stars. Actively maintained with 24 commits in the last 30 days.

No package published · No dependents

Maintenance: 23 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 21 / 25
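The four subscores above add up to the overall 70 / 100. A minimal Python sketch of that arithmetic, assuming the total is a plain sum of the 25-point subscores (the site does not document its formula here):

```python
# Subscores copied from the card above; the summation rule is an assumption.
subscores = {
    "Maintenance": 23,
    "Adoption": 10,
    "Maturity": 16,
    "Community": 21,
}

total = sum(subscores.values())          # 23 + 10 + 16 + 21 = 70
maximum = 25 * len(subscores)            # four categories x 25 points = 100

print(f"Score: {total} / {maximum}")     # Score: 70 / 100
```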


Stars: 68,347
Forks: 8,346
Language: Python
License: Apache-2.0
Last pushed: Mar 10, 2026
Commits (30d): 24

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hiyouga/LlamaFactory"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
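The same data can be fetched programmatically. A minimal Python sketch using only the standard library, assuming the endpoint from the curl example returns JSON (the response schema is not documented here, so field names are not assumed):

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality"
repo = "transformers/hiyouga/LlamaFactory"
url = f"{BASE}/{repo}"


def fetch_quality(endpoint: str) -> dict:
    """Fetch one repository's quality record as parsed JSON.

    Anonymous access is limited to 100 requests/day, so callers
    should cache results rather than poll.
    """
    with urllib.request.urlopen(endpoint, timeout=10) as resp:
        return json.load(resp)


# Usage (performs a network request):
#     record = fetch_quality(url)
#     print(record)
```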