LLM Finetuning Frameworks
Comprehensive platforms and toolkits for fine-tuning pre-trained large language models on custom datasets, including training orchestration, dataset curation, and model optimization. Does NOT include inference frameworks, model deployment tools, or general LLM training from scratch.
There are 85 LLM fine-tuning frameworks tracked; 2 score above 50 (the established tier). The highest-rated is limix-ldm-ai/LimiX at 54/100, with 3,340 stars.
Get the project list as JSON:

```bash
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=ml-frameworks&subcategory=llm-finetuning-frameworks&limit=20"
```

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
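For scripted access, a minimal Python sketch along these lines should work. The endpoint and query parameters are taken from the curl command above; the `limit=85` value (to request every tracked project in one call) and the shape of the JSON response are assumptions, not documented behavior.

```python
# Minimal sketch: fetch the tracked frameworks as JSON (unauthenticated,
# within the 100 requests/day limit described above).
import json
import urllib.parse
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/datasets/quality"
params = {
    "domain": "ml-frameworks",
    "subcategory": "llm-finetuning-frameworks",
    "limit": 85,  # assumption: raising the limit returns all tracked projects
}

url = f"{BASE_URL}?{urllib.parse.urlencode(params)}"
with urllib.request.urlopen(url, timeout=30) as resp:
    data = json.load(resp)

# The exact response schema is not documented here; print it for inspection.
print(json.dumps(data, indent=2)[:2000])
```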
| # | Framework | Description | Tier |
|---|---|---|---|
| 1 | limix-ldm-ai/LimiX | LimiX: Unleashing Structured-Data Modeling Capability for Generalist... | Established |
| 2 | XXO47OXX/layer-scan | Automated LLM layer duplication config scanner — find the optimal (i,j) for... | Established |
| 3 | tatsu-lab/stanford_alpaca | Code and documentation to train Stanford's Alpaca models, and generate the data. | Emerging |
| 4 | YalaLab/pillar-finetune | Finetuning framework for Pillar medical imaging models. | Emerging |
| 5 | Kakz/prometheus-llm | PrometheusLLM is a unique transformer architecture inspired by dignity and... | Emerging |
| 6 | mcp-tool-shop-org/backpropagate | Headless LLM fine-tuning in 3 lines — smart defaults, VRAM-aware batch... | Emerging |
| 7 | google-research/plur | PLUR (Programming-Language Understanding and Repair) is a collection of... | Emerging |
| 8 | YalaLab/pillar-pretrain | This repository contains the pretraining code for the Pillar-0 model. | Emerging |
| 9 | Samyak-777/nomodel | The world's most accurate LLM. It achieves 0% hallucination rate by... | Emerging |
| 10 | santos-sanz/mlx-lora-finetune-template | Template for fine-tuning LLMs with LoRA using Apple MLX on Mac Silicon | Emerging |
| 11 | thuml/LogME | Code release for "LogME: Practical Assessment of Pre-trained Models for... | Emerging |
| 12 | furkantanyol/aitelier | An opinionated workflow tool for managing the full lifecycle of fine-tuning datasets | Emerging |
| 13 | Love-Asuka/Etude-LLM | The word "Etude" comes from French and originally means "study" or "practice piece"; in music it refers to a short, refined piece written to improve performance technique. In this project, "Etude... | Emerging |
| 14 | joisino/reeval-wmd | Code for "Re-evaluating Word Mover’s Distance" (ICML 2022) | Emerging |
| 15 | Cloud-CV/diverse-beam-search | Decoding Diverse Solutions from Neural Sequence Models | Experimental |
| 16 | P1ayer-1/Llama-LibTorch | Llama causal LM fully recreated in LibTorch. Designed to be used in Unreal Engine 5 | Experimental |
| 17 | ashworks1706/llm-from-scratch | A theoretical and practical deep dive into Large Language Models and their... | Experimental |
| 18 | gruai/koifish | A C++ framework for efficient training & fine-tuning of LLMs | Experimental |
| 19 | yigitkonur/cli-finetune-dataset | Weighted category-balanced dataset builder for LLM fine-tuning | Experimental |
| 20 | Khaeldur/NeuralForge | On-device LLM fine-tuning for Apple Silicon (ANE) | Experimental |
| 21 | EngineeringSoftware/CoditT5 | CoditT5: Pretraining for Source Code and Natural Language Editing | Experimental |
| 22 | tk-rusch/LEM | Official code for Long Expressive Memory (ICLR 2022, Spotlight) | Experimental |
| 23 | mcaimi/flan-t5-finetune-ita | This repository has been moved to... | Experimental |
| 24 | KazKozDev/synth-dataset-kit | CLI tool for generating high-quality synthetic datasets for LLM fine-tuning. | Experimental |
| 25 | jesusvilela/IGBundle-LLM | IGBundle LLM is an experimental framework for adapting Large Language Models... | Experimental |
| 26 | raajmandale/mos-parameter-golf | CRS-LM: Structure-aware context reduction for tiny language models under... | Experimental |
| 27 | yadavsidhant/quickllm | QuickLLM: Fast and Easy Fine-tuning for Popular Language Models | Experimental |
| 28 | loevlie/neuropt | LLM-guided ML optimization. Point it at a training script, it reads the... | Experimental |
| 29 | eilamc14/Simplify-This | Comparative Analysis of Prompt-Based and Fine-Tuned LLMs | Experimental |
| 30 | Radket27/Simple-LLM | Simple LLM | Experimental |
| 31 | MaheshJakkala/llm-c-transformer | Transformer LLM from scratch in C: custom tensor lib, INT8 post-training... | Experimental |
| 32 | Yog-Sotho/Brainbrew | A simple GUI tool that generates LLM training datasets through model... | Experimental |
| 33 | Lukin-GCST/mgs-llm-stability-sensor | Geometric stability sensor for detecting hallucinations in LLM outputs | Experimental |
| 34 | rickiepark/fine-tuning-llm | | Experimental |
| 35 | rpatrik96/hallmark | HALLMARK: Citation hallucination detection benchmark for ML papers — 2,525... | Experimental |
| 36 | mamoun78444/llm-json | Parse JSON quickly using a fast, recursive-descent parser designed for... | Experimental |
| 37 | frafalcone/llm-design-train | A PyTorch implementation of a LLaMA-inspired LLM, featuring GQA, RoPE, and SwiGLU. | Experimental |
| 38 | Nagavenkatasai7/llm-forge | Config-driven, YAML-first open-source LLM training platform. Fine-tune... | Experimental |
| 39 | shiv81500/Mobius-LLM-Fine-tuning-Engine | 🔧 Fine-tune large language models locally on your data, export to GGUF, and... | Experimental |
| 40 | machelreid/lewis | Official code for LEWIS, from: "LEWIS: Levenshtein Editing for Unsupervised... | Experimental |
| 41 | jordandeklerk/Starcoder2-Finetune-Code-Completion | Finetuning Starcoder2-3B for Code Completion on a single A100 GPU | Experimental |
| 42 | eshanized/SLMGen | Fine-tune small language models the right way — dataset intelligence,... | Experimental |
| 43 | BioDT/bfm-finetune | Finetune routines for the Biodiversity Foundation Model | Experimental |
| 44 | machinelearningnuremberg/QuickTune | [ICLR2024] Quick-Tune: Quickly Learning Which Pretrained Model to Finetune and How | Experimental |
| 45 | OSU-MLB/Fine-Tuning-Is-Fine-If-Calibrated | Official Implementation of "Fine-Tuning is Fine, if Calibrated.", NeurIPS 2024 | Experimental |
| 46 | Aliyan-12/deepseek-r1-finetuning-using-qlora-for-medical-reasoning---colab | PEFT (Parameter-Efficient Fine-Tuning) workflow for Unsloth/DeepSeek-R1 on... | Experimental |
| 47 | nshkrdotcom/vllm | vLLM - High-throughput, memory-efficient LLM inference engine with... | Experimental |
| 48 | hzwwww/LLM-From-Zero-to-Hero | A hands-on project for systematically learning LLMs from scratch: a series of carefully designed Jupyter Notebooks takes you from foundational theory to core algorithms, step by step through the key techniques of LLMs | Experimental |
| 49 | teddante/Ensemble | A modern web application that queries multiple Large Language Models... | Experimental |
| 50 | alexisbriandev/mini-llm | A modular, educational, and high-performance implementation of a... | Experimental |
| 51 | Hashmat02/Fine-Tuning-LLaMA-2-for-Toxicity-Classification | Fine-tuning LLaMA 2 for toxicity classification using a balanced Kaggle... | Experimental |
| 52 | Scottcjn/pse-vcipher-collapse | Non-bijunctive attention collapse for LLM inference — POWER8 hardware AES... | Experimental |
| 53 | Ari-S-123/pii-masking | Improved PII masking performance in adversarial conditions and diverse... | Experimental |
| 54 | aimonlabs/hallucination-detection-model | HalluciNot: Hallucination Detection Through Context and Common Knowledge Verification | Experimental |
| 55 | SunPCSolutions/FinetuneOrch | FineTuneOrch is a web-based orchestration dashboard that simplifies... | Experimental |
| 56 | originaonxi/asm-replication | Replication study — Adaptive Skill Modeling for multi-task LLM training.... | Experimental |
| 57 | MarsPain/easy_llm.cpp | A minimal C++ framework for learning and understanding the LLM inference... | Experimental |
| 58 | KouseiA/AGI_HER_LLM | 🚀 Adapt large language models continuously with task-agnostic methods,... | Experimental |
| 59 | barbidoux/Lumi-Lab | A simple mini LLM training project to train your LLM from scratch, from the... | Experimental |
| 60 | asoloveii/nano-llm | An implementation of a custom language model from scratch in PyTorch.... | Experimental |
| 61 | harshi1111/multi-granular-llm-analysis | Production-ready system for detecting WHERE LLM responses fail, not just IF... | Experimental |
| 62 | MukundaKatta/grammarprobe | GrammarProbe — Universal Grammar Detector. Test whether LLMs have... | Experimental |
| 63 | Mikeore/lumi-arch-research | Public research notes on compact architecture exploration for efficient... | Experimental |
| 64 | atasoglu/awesome-turkish-vlm | A curated list of models, datasets and other useful resources for Turkish... | Experimental |
| 65 | Saivineeth147/LLM-Compass | The ultimate collection of resources for building, evaluating, and... | Experimental |
| 66 | NeuroRaptor/clip-hallucination-detection | Evidence-based hallucination detection framework for CLIP vision-language... | Experimental |
| 67 | Wasisange/llm-finetuning-toolkit | A toolkit for efficient fine-tuning of large language models on custom datasets. | Experimental |
| 68 | Restroulner/LLM-Fine-tuning-Toolkit | A comprehensive toolkit for fine-tuning Large Language Models (LLMs) with... | Experimental |
| 69 | dakshjain-1616/gemma-3-12b-medical-sft | Fine-tunes google/gemma-3-12b-it with Unsloth SFT and LoRA (r=32, alpha=64)... | Experimental |
| 70 | Bender1011001/dual-system-architecture | Geometric sidecar for LLMs — uncensored + structured reasoning, zero... | Experimental |
| 71 | RodrigoVargasMolina/liteweight-pony-trainer-8g-safetensor | Lightweight SDXL LoRA trainer optimized for 8GB VRAM GPUs. GUI with... | Experimental |
| 72 | Joe-Naz01/SFTT_Trainer | This repository contains a comprehensive pipeline for fine-tuning Large... | Experimental |
| 73 | Prajit-Rahul/Lightweight-Multilingual-Translation-for-Edge-Devices | LoRA, distillation, quantization, and pruning for edge-friendly multilingual... | Experimental |
| 74 | sacredvoid/alignrl | LLM post-training playbook: SFT, GRPO, DPO, eval, and inference. pip install alignrl | Experimental |
| 75 | ryan-air/Alpaca-3B-Fine-Tuned | In this project, I have provided code and a Colaboratory notebook that... | Experimental |
| 76 | Optum/long-medical-document-lms | Explain and train language models that extract information from long medical... | Experimental |
| 77 | matjsz/shard | Shard is an open-source LLM tuning package for Python, which can turn any... | Experimental |
| 78 | vaibhavnayak30/llm_finetuning | This repository offers concise code for LLM fine-tuning to efficiently adapt... | Experimental |
| 79 | hesamsheikh/AnimAI-Trainer | Train an LLM to generate cracked Manim animations for mathematical concepts. | Experimental |
| 80 | slsandarubot/DeGAML-LLM | 🚀 Enhance large language models with DeGAML-LLM, a meta-learning approach... | Experimental |
| 81 | kantkrishan0206-crypto/LLM-building-a-Large-Language-Model-LLM- | A comprehensive, educational project dedicated to building a Large... | Experimental |
| 82 | umarmk/llm-fine-tuning-phi3 | Fine-tune an LLM for people-information extraction using Unsloth | Experimental |
| 83 | mateluky/llm-heart-failure-analysis | Research project evaluating large language models (LLMs) for heart failure... | Experimental |
| 84 | beviah/GENbAIs | Bio-inspired adapters that improve foundation models beyond LoRA... | Experimental |
| 85 | jordandeklerk/OpenCodeInterpreter-Finetune-SQL | Fine-tuning coding LLM OpenCodeInterpreter-DS-6.7B for Text-to-SQL Code... | Experimental |