oumi and LlamaFactory
These are competing projects with overlapping functionality: both provide unified fine-tuning frameworks for open-source LLMs and VLMs using LoRA/QLoRA. LlamaFactory supports a significantly larger model zoo (100+ models), while Oumi emphasizes evaluation and deployment alongside fine-tuning.
About oumi
oumi-ai/oumi
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
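Oumi's workflow centers on a single CLI whose subcommands mirror the tagline above. A minimal sketch, assuming the subcommand names from the project's README; the YAML file names are placeholders, so verify both against your installed version:

```shell
pip install oumi

# Each step is driven by a YAML config; the file names below are placeholders.
oumi train    -c train_config.yaml     # fine-tune (e.g. LoRA/QLoRA)
oumi evaluate -c eval_config.yaml      # benchmark the resulting model
oumi infer    -c infer_config.yaml -i  # interactive inference check
```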
About LlamaFactory
hiyouga/LlamaFactory
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Supports modular fine-tuning approaches including supervised fine-tuning, reward modeling, and reinforcement-learning methods (PPO, DPO, KTO, ORPO), with optimizations such as FlashAttention, quantized LoRA (QLoRA), and advanced optimizers (GaLore, BAdam, Muon). Provides both a CLI and a Gradio web interface for model training and inference, and integrates with vLLM/SGLang for OpenAI-compatible API deployment.
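A LoRA supervised fine-tuning run in LlamaFactory is typically driven by a YAML recipe passed to its CLI. The sketch below is illustrative only: the file name, model, and dataset are assumptions, and key names should be checked against the example configs shipped in the repository:

```yaml
# llama3_lora_sft.yaml -- hypothetical recipe; key names follow
# LlamaFactory's example configs, but verify them for your version.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft                 # supervised fine-tuning (other stages include rm, ppo, dpo, kto)
do_train: true
finetuning_type: lora      # parameter-efficient LoRA adapters
lora_target: all
dataset: alpaca_en_demo
template: llama3
output_dir: saves/llama3-8b/lora/sft
per_device_train_batch_size: 1
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

Such a recipe is launched with `llamafactory-cli train llama3_lora_sft.yaml`; `llamafactory-cli webui` starts the Gradio interface, and `llamafactory-cli api` serves an OpenAI-compatible endpoint.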