LlamaFactory and LLM-Finetuning

LlamaFactory is a comprehensive fine-tuning framework that abstracts away lower-level details, whereas LLM-Finetuning is a collection of direct implementations built on PEFT (Parameter-Efficient Fine-Tuning) as the core library. The two are competitors addressing the same task at different levels of abstraction.

|               | LlamaFactory              | LLM-Finetuning                                 |
|---------------|---------------------------|------------------------------------------------|
| Score         | 70 (Verified)             | 45 (Emerging)                                  |
| Maintenance   | 23/25                     | 2/25                                           |
| Adoption      | 10/25                     | 10/25                                          |
| Maturity      | 16/25                     | 8/25                                           |
| Community     | 21/25                     | 25/25                                          |
| Stars         | 68,347                    | 2,827                                          |
| Forks         | 8,346                     | 725                                            |
| Downloads     |                           |                                                |
| Commits (30d) | 24                        | 0                                              |
| Language      | Python                    | Jupyter Notebook                               |
| License       | Apache-2.0                |                                                |
| Flags         | No Package, No Dependents | No License, Stale 6m, No Package, No Dependents |

About LlamaFactory

hiyouga/LlamaFactory

Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)

Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning methods (PPO, DPO, KTO, ORPO), with optimizations such as FlashAttention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). It provides both a CLI and a Gradio web interface for training and inference, and integrates with vLLM/SGLang for OpenAI-compatible API deployment.
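As a sketch of that workflow: LlamaFactory drives training from a YAML config passed to its `llamafactory-cli` entry point. The key names below follow the project's published example configs for LoRA supervised fine-tuning, but the model path, dataset name, and output directory are placeholders, and exact fields may differ across versions.

```yaml
# Hypothetical LoRA SFT config sketch; key names follow LlamaFactory's
# example configs and may vary by version.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                  # supervised fine-tuning (other stages: rm, ppo, dpo, kto)
do_train: true
finetuning_type: lora       # parameter-efficient LoRA adapters
lora_target: all            # attach adapters to all linear layers
dataset: identity           # dataset registered in the project's dataset index
template: llama3            # chat template matching the base model
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Training would then be launched with something like `llamafactory-cli train <config>.yaml`, and the same CLI exposes inference and export subcommands.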

About LLM-Finetuning

ashishpatel26/LLM-Finetuning

LLM Finetuning with peft
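Since the repository centers on PEFT's LoRA adapters, here is a minimal NumPy sketch of the underlying idea (illustrative only, not the PEFT API): rather than updating the full weight matrix W, LoRA freezes W, trains two small factors B and A, and adds the scaled low-rank product BA to the frozen weight.

```python
import numpy as np

d, k, r = 64, 64, 8                 # weight shape (d x k), LoRA rank r
rng = np.random.default_rng(0)
W = rng.normal(size=(d, k))         # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialized, so the delta starts at 0

def lora_forward(x, alpha=16):
    # Equivalent to x @ (W + (alpha / r) * B @ A).T, computed without
    # materializing the merged weight.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(2, k))
y = lora_forward(x)

full_params = d * k          # parameters in a full fine-tune of W
lora_params = r * (d + k)    # trainable parameters in the LoRA factors
```

Because B starts at zero, the adapted model initially reproduces the base model exactly, while training only `r * (d + k)` parameters instead of `d * k`; this is the parameter saving that PEFT's `LoraConfig`/`get_peft_model` machinery wraps for real transformer layers.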

Scores updated daily from GitHub, PyPI, and npm data.