LlamaFactory and Finetune_LLMs

Given LlamaFactory's description as a unified, efficient fine-tuning framework for a wide range of LLMs and VLMs, and Finetune_LLMs' description as a repository for fine-tuning causal LLMs, these tools are **competitors**: both provide functionality for fine-tuning large language models, but LlamaFactory presents itself as the more comprehensive and robust solution.

| | LlamaFactory | Finetune_LLMs |
|---|---|---|
| Score | 70 (Verified) | 48 (Emerging) |
| Maintenance | 23/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 21/25 | 22/25 |
| Stars | 68,347 | 458 |
| Forks | 8,346 | 86 |
| Downloads | — | — |
| Commits (30d) | 24 | 0 |
| Language | Python | Python |
| License | Apache-2.0 | AGPL-3.0 |
| Package | No package, no dependents | No package, no dependents (stale 6 months) |
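Judging from the numbers above, each overall score appears to be the simple sum of the four category subscores; this is an inference from the listed values, not a documented scoring formula:

```python
# Subscores (out of 25 each) as listed in the comparison above.
# Summing them to get the overall score is an assumption inferred
# from the numbers, not a rule documented by the scoring site.
subscores = {
    "LlamaFactory": {"maintenance": 23, "adoption": 10, "maturity": 16, "community": 21},
    "Finetune_LLMs": {"maintenance": 0, "adoption": 10, "maturity": 16, "community": 22},
}

totals = {name: sum(parts.values()) for name, parts in subscores.items()}
print(totals)  # {'LlamaFactory': 70, 'Finetune_LLMs': 48}
```

Both sums match the displayed overall scores (70 and 48), which supports the additive reading.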

About LlamaFactory

hiyouga/LlamaFactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations like Flash Attention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). Provides both CLI and Gradio web interface for model training and inference, integrating with vLLM/SGLang for OpenAI-compatible API deployment.
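To illustrate the CLI workflow described above, a minimal LoRA supervised fine-tuning configuration might look like the following. The field names mirror the example configs shipped in the LlamaFactory repository, but treat this as a sketch and check the project's documentation for the current schema:

```yaml
# Sketch of a minimal LoRA SFT config for LlamaFactory (field names
# based on the repo's bundled examples; verify against current docs).
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                # supervised fine-tuning
do_train: true
finetuning_type: lora     # parameter-efficient LoRA adapters
lora_target: all
dataset: alpaca_en_demo   # one of the demo datasets bundled with the repo
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Training would then be launched with `llamafactory-cli train <config>.yaml`, while `llamafactory-cli webui` starts the Gradio interface instead.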

About Finetune_LLMs

mallorbc/Finetune_LLMs

Repo for fine-tuning Casual LLMs

Scores are updated daily from GitHub, PyPI, and npm data.