LlamaFactory and training-custom-llama

LlamaFactory is a comprehensive, production-ready fine-tuning framework that would typically be chosen over a minimal training implementation; the two projects are direct alternatives for the same use case rather than complementary tools.

                   LlamaFactory        training-custom-llama
Overall score      70 (Verified)       39 (Emerging)
Maintenance        23/25               13/25
Adoption           10/25               6/25
Maturity           16/25               9/25
Community          21/25               11/25
Stars              68,347              21
Forks              8,346               3
Downloads          —                   —
Commits (30d)      24                  0
Language           Python              Python
License            Apache-2.0          Apache-2.0
Package            None published      None published
Dependents         None                None

About LlamaFactory

hiyouga/LlamaFactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations such as FlashAttention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). Provides both a CLI and a Gradio web interface for model training and inference, and integrates with vLLM/SGLang for OpenAI-compatible API deployment.
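
DPO is one of the preference-optimization methods listed above. As a rough, self-contained sketch of the underlying objective (not LlamaFactory's actual implementation, and the function name and argument layout here are illustrative assumptions), the loss can be computed from policy and reference log-probabilities of chosen and rejected responses like this:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct Preference Optimization loss (illustrative sketch).

    Each argument is a batch of per-example sequence log-probabilities under
    either the trainable policy or the frozen reference model. `beta` controls
    how strongly the policy is kept close to the reference.
    """
    # Implicit rewards: log-ratio of policy vs. reference for each response.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that pushes the chosen response's reward above the rejected one's.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```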

About training-custom-llama

ruimalheiro/training-custom-llama

A Llama-style transformer implemented in PyTorch with multi-node / multi-GPU training. Covers pretraining, fine-tuning, DPO, LoRA, and knowledge distillation, with scripts for dataset mixing and training from scratch.
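
Both projects support LoRA. As a minimal sketch of the idea (not code taken from this repository; the class name and hyperparameter defaults are assumptions), a low-rank adapter wraps a frozen linear layer so that only the small adapter matrices are trained:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # only the adapter is trained
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)     # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus scaled low-rank correction.
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```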

Scores are updated daily from GitHub, PyPI, and npm data.