LLM Fine-Tuning Transformer Models
This page tracks 77 LLM fine-tuning projects. 2 score above 50 (the established tier). The highest-rated is OptimalScale/LMFlow at 59/100 with 8,489 stars. Only 1 of the top 10 is actively maintained.
Get the projects as JSON (the `limit` query parameter caps how many rows are returned per request):

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=transformers&subcategory=llm-fine-tuning&limit=20"
```
Open to everyone: 100 requests/day with no key needed. A free key raises this to 1,000/day.
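The same endpoint can be called from a script. A minimal sketch in Python: the `domain`, `subcategory`, and `limit` query parameters are taken from the curl example above, and the `quality_url` helper is illustrative (not part of any official client); the response is assumed to be JSON.

```python
from urllib.parse import urlencode

BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def quality_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Build a request URL from the documented query parameters."""
    query = urlencode({"domain": domain, "subcategory": subcategory, "limit": limit})
    return f"{BASE}?{query}"

# Same request as the curl example, with the limit raised to cover all 77 rows:
url = quality_url("transformers", "llm-fine-tuning", limit=77)

# Fetch with any HTTP client, e.g.:
#   import json, urllib.request
#   data = json.load(urllib.request.urlopen(url))
```

`urlencode` also percent-escapes parameter values, so the helper stays correct if a subcategory ever contains characters that are unsafe in a URL.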
| # | Model | Description | Score | Tier |
|---|---|---|---|---|
| 1 | OptimalScale/LMFlow | An Extensible Toolkit for Finetuning and Inference of Large Foundation... | 59 | Established |
| 2 | adithya-s-k/AI-Engineering.academy | Mastering Applied AI, One Concept at a Time | | Established |
| 3 | jax-ml/jax-llm-examples | Minimal yet performant LLM examples in pure JAX | | Emerging |
| 4 | JIA-Lab-research/LongLoRA | Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) | | Emerging |
| 5 | riyanshibohra/TuneKit | Upload your data → Get a fine-tuned SLM. Free. | | Emerging |
| 6 | young-geng/scalax | A simple library for scaling up JAX programs | | Emerging |
| 7 | MaximeRobeyns/bayesian_lora | Bayesian Low-Rank Adaptation for Large Language Models | | Emerging |
| 8 | georgian-io/LLM-Finetuning-Toolkit | Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. | | Emerging |
| 9 | NVlabs/EoRA | [ICLRW'26] EoRA: Fine-tuning-free Compensation for Compressed LLM with... | | Emerging |
| 10 | SakanaAI/text-to-lora | Hypernetworks that adapt LLMs for specific benchmark tasks using only... | | Emerging |
| 11 | kyegomez/Finetuning-Suite | Finetune any model on HF in less than 30 seconds | | Emerging |
| 12 | ZinYY/TreeLoRA | A PyTorch implementation of the paper "TreeLoRA: Efficient Continual... | | Emerging |
| 13 | di37/finetuning-quantize-evaluate | Fine-Tune, Quantize, Evaluate: The Complete Guide (LLMs, VLMs, and Embedding Models) | | Emerging |
| 14 | rohan-paul/LLM-FineTuning-Large-Language-Models | LLM (Large Language Model) FineTuning | | Emerging |
| 15 | GiovanniGatti/socratic-llm | Training pipeline for fine-tuning Phi-3-mini-instruct to follow the Socratic method | | Emerging |
| 16 | A-baoYang/alpaca-7b-chinese | Finetune LLaMA-7B with Chinese instruction datasets | | Emerging |
| 17 | SensAI-PT/LLaMa2lang | Convenience scripts to finetune (chat-)LLaMa3 and other models for any language | | Emerging |
| 18 | NVlabs/DoRA | [ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed... | | Emerging |
| 19 | bigscience-workshop/xmtf | Crosslingual Generalization through Multitask Finetuning | | Emerging |
| 20 | sandy1990418/Finetune-Qwen2.5-VL | Fine-tuning Qwen2.5-VL for vision-language tasks \| Optimized for Vision... | | Emerging |
| 21 | VectorInstitute/vectorlm | LLM finetuning in resource-constrained environments. | | Emerging |
| 22 | punica-ai/punica | Serving multiple LoRA finetuned LLMs as one | | Emerging |
| 23 | liuqidong07/MOELoRA-peft | [SIGIR'24] The official implementation code of MOELoRA. | | Emerging |
| 24 | architkaila/Fine-Tuning-LLMs-for-Medical-Entity-Extraction | Exploring the potential of fine-tuning Large Language Models (LLMs) like... | | Emerging |
| 25 | AlexandrosChrtn/llama-fine-tune-guide | Fine-tune the newly released Llama-3.2 lightweight models. | | Emerging |
| 26 | rasbt/blog-finetuning-llama-adapters | Supplementary material for "Understanding Parameter-Efficient Finetuning of... | | Emerging |
| 27 | molbal/llm-text-completion-finetune | Guide on text-completion large language model fine-tuning, including example... | | Emerging |
| 28 | Yog-Sotho/LLM-fine-tuner | Powerful no-code LLM fine-tuner: upload data → train → deploy in minutes.... | | Emerging |
| 29 | neuralwork/instruct-finetune-mistral | Fine-tune Mistral 7B to generate fashion style suggestions | | Emerging |
| 30 | rasbt/dora-from-scratch | LoRA and DoRA from-scratch implementations | | Emerging |
| 31 | EricLBuehler/xlora | X-LoRA: Mixture of LoRA Experts | | Emerging |
| 32 | anchen1011/FireAct | FireAct: Toward Language Agent Fine-tuning | | Emerging |
| 33 | ymoslem/Adaptive-MT-LLM-Fine-tuning | Fine-tuning Open-Source LLMs for Adaptive Machine Translation | | Emerging |
| 34 | TrelisResearch/install-guides | Various installation guides for Large Language Models | | Emerging |
| 35 | promptslab/LLMtuner | FineTune LLMs in a few lines of code (Text2Text, Text2Speech, Speech2Text) | | Emerging |
| 36 | ksm26/Finetuning-Large-Language-Models | Unlock the potential of finetuning Large Language Models (LLMs). Learn from... | | Emerging |
| 37 | openmedlab/PULSE | PULSE: Pretrained and Unified Language Service Engine | | Emerging |
| 38 | poloclub/Fine-tuning-LLMs | Finetune Llama 2 on Colab for free on your own data: step-by-step tutorial | | Emerging |
| 39 | aws-samples/fine-tuning-llm-with-domain-knowledge | This repo walks you through how to use transfer learning to fine-tune an LLM... | | Emerging |
| 40 | zjohn77/lightning-mlflow-hf | Use QLoRA to tune LLMs in PyTorch Lightning with Hugging Face + MLflow | | Emerging |
| 41 | researchim-ai/models-at-home | Training models at home | | Experimental |
| 42 | ngoanpv/llama2_vietnamese | A fine-tuned Large Language Model (LLM) for the Vietnamese language based on... | | Experimental |
| 43 | Pengxin-Guo/FedSA-LoRA | Selective Aggregation for Low-Rank Adaptation in Federated Learning [ICLR 2025] | | Experimental |
| 44 | rasbt/gradient-accumulation-blog | Finetuning BLOOM on a single GPU using gradient accumulation | | Experimental |
| 45 | eliahuhorwitz/Spectral-DeTuning | Official PyTorch implementation for the "Recovering the Pre-Fine-Tuning... | | Experimental |
| 46 | CristianCristanchoT/chivito | Implementation of a Llama-based LLM fine-tuned in Spanish using... | | Experimental |
| 47 | MNoorFawi/curlora | The code repository for the CURLoRA research paper. Stable LLM continual... | | Experimental |
| 48 | graphcore-research/jax-scalify | JAX Scalify: end-to-end scaled arithmetic | | Experimental |
| 49 | BFCmath/FinetuneAI_Learning | How to effectively finetune CV/LLM models (without a local GPU) | | Experimental |
| 50 | samadon1/LLM-From-Scratch | Medical Language Model fine-tuned using pretraining, instruction tuning, and... | | Experimental |
| 51 | mddunlap924/PyTorch-LLM | Fine-tuning an LLM using a generic workflow and best practices with PyTorch | | Experimental |
| 52 | juzhengz/LoRI | [COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | | Experimental |
| 53 | naity/finetune-esm | Scalable Protein Language Model Finetuning with Distributed Learning and... | | Experimental |
| 54 | j-webtek/Local-LLM_FineTune | Finetune Your Local LLM | | Experimental |
| 55 | yangjianxin1/LongQLoRA | LongQLoRA: Extend Context Length of LLMs Efficiently | | Experimental |
| 56 | serp-ai/LLaMA-8bit-LoRA | Repository for Chat LLaMA - training a LoRA for the LLaMA (1 or 2) models on... | | Experimental |
| 57 | mehdihosseinimoghadam/AVA-Llama-3 | Fine-Tuned Llama 3 Persian Large Language Model LLM / Persian Llama 3 | | Experimental |
| 58 | francoislanc/midistral | LLM finetuned for generating symbolic music | | Experimental |
| 59 | YuanheZ/LoRA-One | LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large ... | | Experimental |
| 60 | TobyYang7/Llava_Qwen2 | Visual Instruction Tuning for Qwen2 Base Model | | Experimental |
| 61 | roy-sub/LLM-FineTuning | Fine-Tuned Language Models Exploration using LoRA and Hugging Face's... | | Experimental |
| 62 | YanSte/NLP-LLM-Fine-tuning-Llame-2-QLoRA-2024 | Natural Language Processing (NLP) and Large Language Models (LLM) with... | | Experimental |
| 63 | Abhi0323/Fine-Tuning-LLaMA-2-with-QLORA-and-PEFT | This project enhances the LLaMA-2 model using Quantized Low-Rank Adaptation... | | Experimental |
| 64 | MusfiqDehan/Llama2-Finetuned-for-Translation | Fine-Tuned Llama-2 for Machine Translation | | Experimental |
| 65 | Marker-Inc-Korea/KO-Platypus | [KO-Platy🥮] KO-Platypus model: llama-2-ko fine-tuned on Korean-Open-platypus | | Experimental |
| 66 | adithya-s-k/Indic-llm | An open-source framework designed to adapt pre-trained Language Models... | | Experimental |
| 67 | rambodazimi/KD-LoRA | KD-LoRA: A Hybrid Approach to Efficient Fine-Tuning with LoRA and Knowledge... | | Experimental |
| 68 | jkanalakis/finetuning-llama-model-for-text-generation-using-unsloth | Fine-tuning the Llama 3.2 3B Instruct model for text generation using Unsloth AI | | Experimental |
| 69 | aniquetahir/JORA | JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) | | Experimental |
| 70 | Rs-py/HowToFineTuneLlama3.1 | Quick tutorial showing how to fine-tune Llama3.1 with nothing but free tools... | | Experimental |
| 71 | LimDoHyeon/EEG-LLM | Fine-tuned LLM for electroencephalography (EEG) data classification | | Experimental |
| 72 | HenryNdubuaku/super-lazy-autograd | Hand-derived memory-efficient VJPs for tuning LLMs on laptops. | | Experimental |
| 73 | paulocoutinhox/mini-llm | Simple and lightweight tool to fine-tune GPT models (like GPT-2 and GPT-Neo)... | | Experimental |
| 74 | YYZhang2025/Pali-Gemma | Implement a multi-modality LLM and fine-tune the model using LoRA. Only... | | Experimental |
| 75 | ph-ausseil/llm-training-dataset-builder | Streamlines the creation of datasets to train a Large Language Model with... | | Experimental |
| 76 | louisc-s/QLoRA-Fine-tuning-for-Film-Character-Styled-Responses-from-LLM | Code for fine-tuning Llama2 LLM with a custom text dataset to produce film... | | Experimental |
| 77 | garyfanhku/Galore-pytorch | GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection | | Experimental |
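When processing the API response in code, scores can be mapped back to the tier labels used above. A rough sketch: only the "above 50 is established" cutoff is stated on this page, so the emerging/experimental boundary of 25 below is a hypothetical placeholder, not a documented threshold.

```python
def tier_for(score: int) -> str:
    """Map a 0-100 quality score to a tier label.

    Only the >50 "established" cutoff is documented here; the
    emerging/experimental boundary (25) is a hypothetical placeholder.
    """
    if score > 50:
        return "established"
    if score > 25:  # hypothetical cutoff, not documented on this page
        return "emerging"
    return "experimental"

# The top-rated project, OptimalScale/LMFlow at 59/100, lands in the
# established tier, matching the table above.
print(tier_for(59))
```

Adjust the lower cutoff once the real boundary is known; everything else follows directly from the tiering described in the intro.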