OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Supports advanced training techniques including LISA (finetuning 7B models in 24 GB of GPU memory), custom optimizers, speculative decoding, and Flash Attention-2, and integrates with Hugging Face models and Accelerate for distributed training. Features preset conversation templates for popular models such as Llama-3 and Phi-3, plus multimodal support for vision-language tasks. Built around a modular pipeline architecture that decouples data processing, model loading, and training stages, giving flexibility across diverse foundation model architectures.
8,489 stars. Actively maintained with 1 commit in the last 30 days.
Stars
8,489
Forks
830
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 15, 2026
Commits (30d)
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/OptimalScale/LMFlow"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
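The same request can be scripted instead of typed into curl. A minimal Python sketch, using only the standard library: the endpoint URL comes from the curl example above, but the helper names (`quality_url`, `fetch_quality`) are hypothetical, and the JSON response schema is not documented on this page, so the fetcher just returns whatever the endpoint sends back.

```python
import json
from urllib.request import urlopen

# Base path taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository (helper name is ours)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body; schema is undocumented here,
    so the raw parsed payload is returned as-is."""
    with urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

# Reproduces the curl example's URL:
url = quality_url("transformers", "OptimalScale", "LMFlow")
```

Without a key this counts against the 100-requests/day anonymous quota; how a free key is attached to a request (header vs. query parameter) is not stated on this page, so no auth handling is sketched here.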
Related models
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
JIA-Lab-research/LongLoRA
Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral)
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.
young-geng/scalax
A simple library for scaling up JAX programs