axolotl-ai-cloud/axolotl
Go ahead and axolotl questions
Supports advanced training techniques including LoRA, QAT, and distributed parallelism (FSDP, tensor parallelism, sequence parallelism) across single and multi-GPU setups via YAML configuration. Integrates with Hugging Face Transformers and supports 50+ model architectures including Llama, Mistral, Qwen, and multimodal models, with specialized optimizations for MoE experts and long-context training.
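As a sketch of the YAML-driven workflow, a minimal LoRA fine-tuning config might look like the following. The key names reflect common axolotl examples, and the base model and dataset paths are illustrative placeholders; consult the project docs for the authoritative schema:

```yaml
# Minimal illustrative axolotl config (key names from common examples; verify against the docs)
base_model: NousResearch/Llama-2-7b-hf  # assumed example model

adapter: lora            # train a LoRA adapter instead of full weights
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05

datasets:
  - path: teknium/GPT4-LLM-Cleaned     # assumed example dataset
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
output_dir: ./outputs/lora-llama2
```

In recent versions a run is then typically launched with something like `axolotl train config.yml`.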
11,429 stars. Actively maintained with 91 commits in the last 30 days. Available on PyPI.
Stars: 11,429
Forks: 1,268
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 91
Dependencies: 57
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/axolotl-ai-cloud/axolotl"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
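The endpoint above appears to follow the pattern `/api/v1/quality/llm-tools/{owner}/{repo}`. A small helper to build such URLs from Python, as a minimal sketch (the `quality_url` name and the pattern inferred from the single example are assumptions, not a documented client API):

```python
from urllib.parse import quote

# Base inferred from the curl example above; not an official client library.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub repo (helper name is illustrative)."""
    # quote() guards against characters that need percent-encoding in path segments.
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"


print(quality_url("axolotl-ai-cloud", "axolotl"))
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/axolotl-ai-cloud/axolotl
```

The resulting URL can then be fetched with any HTTP client (e.g. `curl` as shown above).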
Related tools
google/paxml
Pax is a Jax-based machine learning framework for training large scale models. Pax allows for...
JosefAlbers/PVM
Phi-3.5 for Mac: Locally-run Vision and Language Models for Apple Silicon
iamarunbrahma/finetuned-qlora-falcon7b-medical
Finetuning of Falcon-7B LLM using QLoRA on Mental Health Conversational Dataset
h2oai/h2o-wizardlm
Open-Source Implementation of WizardLM to turn documents into Q:A pairs for LLM fine-tuning
WangRongsheng/Aurora
The official codes for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse...