LlamaFactory and FineTuningLLMs
LlamaFactory scores
Maintenance: 23/25
Adoption: 10/25
Maturity: 16/25
Community: 21/25

FineTuningLLMs scores
Maintenance: 10/25
Adoption: 10/25
Maturity: 16/25
Community: 21/25
LlamaFactory stats
Stars: 68,347
Forks: 8,346
Downloads: —
Commits (30d): 24
Language: Python
License: Apache-2.0

FineTuningLLMs stats
Stars: 786
Forks: 103
Downloads: —
Commits (30d): 0
Language: Jupyter Notebook
License: MIT
Package data
LlamaFactory: no published package tracked, no dependents
FineTuningLLMs: no published package tracked, no dependents
About LlamaFactory
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement learning and preference optimization methods (PPO, DPO, KTO, ORPO), with optimizations such as FlashAttention, quantized LoRA, and advanced optimizers (GaLore, BAdam, Muon). Provides both a CLI and a Gradio web interface for model training and inference, and integrates with vLLM/SGLang for OpenAI-compatible API deployment.
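To make the workflow above concrete, here is a minimal sketch of how a LoRA supervised fine-tuning run might be launched through LLaMA-Factory's CLI. The YAML keys, base model, and dataset name are illustrative assumptions based on the project's example configs, not an authoritative recipe; check the repository's own examples for the exact schema.

```python
# Hedged sketch: write a minimal LoRA SFT config and launch it with the
# llamafactory-cli entry point. Field values below are assumptions.
import subprocess
import yaml  # requires PyYAML

config = {
    "model_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed base model
    "stage": "sft",                 # supervised fine-tuning stage
    "do_train": True,
    "finetuning_type": "lora",      # plain LoRA; quantized variants need extra settings
    "lora_target": "all",
    "dataset": "alpaca_en_demo",    # assumed demo dataset name
    "template": "llama3",
    "output_dir": "saves/llama3-8b-lora-sft",
    "per_device_train_batch_size": 1,
    "learning_rate": 1.0e-4,
    "num_train_epochs": 3.0,
}

with open("lora_sft.yaml", "w") as f:
    yaml.safe_dump(config, f)

# Start training; the same CLI exposes `webui` for the Gradio interface
# and `api` for the OpenAI-compatible serving path mentioned above.
subprocess.run(["llamafactory-cli", "train", "lora_sft.yaml"], check=True)
```

After training, the `api` subcommand can serve the adapter behind an OpenAI-compatible endpoint, so any standard OpenAI client pointed at the local base URL can query the fine-tuned model.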
About FineTuningLLMs
dvgodoy/FineTuningLLMs
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
Scores are updated daily from GitHub, PyPI, and npm data.