unsloth and LLamaTuner

                 unsloth          LLamaTuner
Score            81 (Verified)    44 (Emerging)
Maintenance      22/25            0/25
Adoption         15/25            10/25
Maturity         25/25            16/25
Community        19/25            18/25
Stars            53,879           620
Forks            4,503            65
Downloads
Commits (30d)    453              0
Language         Python           Python
License          Apache-2.0       Apache-2.0
Risk flags       None             Stale 6m, No Package, No Dependents

About unsloth

unslothai/unsloth

Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.

This tool helps AI engineers and researchers efficiently customize large language models (LLMs) and other AI models for specific tasks. You can input various data formats like PDFs, CSVs, and DOCX files to fine-tune models such as GPT-OSS, Llama, or Gemma. The output is a specialized AI model that performs better on your unique data, with significantly faster training and reduced memory use.

AI model training, natural language processing, machine learning engineering, deep learning optimization, AI research

About LLamaTuner

jianzhnie/LLamaTuner

Easy and efficient fine-tuning of LLMs (supports LLama, LLama2, LLama3, Qwen, Baichuan, GLM, Falcon). Efficient quantized training and deployment of large models.

This toolkit helps machine learning engineers and researchers adapt large language models (LLMs) to specific tasks or datasets. It takes a base LLM and your custom data, and outputs a fine-tuned model specialized for your use case. It's designed for anyone who needs to efficiently customize state-of-the-art AI models for their own applications, even on limited hardware.

large-language-models, model-customization, AI-training, natural-language-processing, machine-learning-engineering

Scores updated daily from GitHub, PyPI, and npm data.