OptimalScale/LMFlow

An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.

Score: 59/100 (Established)

Supports advanced training techniques including LISA (fine-tuning 7B models in 24 GB of GPU memory), custom optimizers, speculative decoding, and Flash Attention-2, and integrates with Hugging Face models and Accelerate for distributed training. Features preset conversation templates for popular models such as Llama-3 and Phi-3, plus multimodal capabilities for vision-language tasks. Built around a modular pipeline architecture that decouples data processing, model loading, and training stages, giving flexibility across diverse foundation model architectures.
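The decoupled pipeline idea can be sketched as three independent stages whose outputs compose. This is a minimal illustration of the pattern only; the class and method names below are hypothetical, not LMFlow's actual API.

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    samples: list

class DataProcessor:
    def load(self, raw):
        # Stage 1: data processing, independent of any model choice.
        return Dataset(samples=[s.strip().lower() for s in raw])

class ModelLoader:
    def load(self, name):
        # Stage 2: model loading; swap architectures without touching data code.
        return {"name": name}

class Trainer:
    def run(self, model, dataset):
        # Stage 3: training consumes the outputs of the first two stages.
        return f"trained {model['name']} on {len(dataset.samples)} samples"

processor, loader, trainer = DataProcessor(), ModelLoader(), Trainer()
result = trainer.run(loader.load("llama-3"), processor.load([" Hello ", "World"]))
print(result)  # trained llama-3 on 2 samples
```

Because each stage only depends on the previous stage's output type, any one of them can be replaced (a new tokenizer, a different architecture, another trainer) without modifying the others.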

8,489 stars. Still maintained, with 1 commit in the last 30 days.

No package published; no dependents.
Maintenance: 13/25
Adoption: 10/25
Maturity: 16/25
Community: 20/25
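The four subscores (each out of 25) add up to the overall score shown above:

```python
# Each subscore is capped at 25; the overall score is their sum out of 100.
subscores = {"Maintenance": 13, "Adoption": 10, "Maturity": 16, "Community": 20}
overall = sum(subscores.values())
print(overall)  # 59
```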


Stars: 8,489
Forks: 830
Language: Python
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Feb 15, 2026
Commits (30d): 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/OptimalScale/LMFlow"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
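The same endpoint can be consumed from Python with the standard library. The fetch helper below is a sketch; the JSON field names in the sample ("stars", "scores") are assumptions shaped after the fields on this page, not a documented schema.

```python
import json
from urllib.request import urlopen

URL = "https://pt-edge.onrender.com/api/v1/quality/transformers/OptimalScale/LMFlow"

def fetch_quality(url=URL):
    # Performs the same GET request as the curl command above.
    with urlopen(url, timeout=10) as resp:
        return json.load(resp)

# Offline illustration with a response shaped like the fields on this page
# (field names are assumed, not guaranteed by the API):
sample = json.loads('{"stars": 8489, "scores": {"maintenance": 13, "community": 20}}')
print(sample["stars"], sample["scores"]["community"])  # 8489 20
```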