vaibhavnayak30/llm_finetuning
This repository offers concise code for fine-tuning LLMs, efficiently adapting pre-trained models. It covers key techniques: LoRA, QLoRA, and other parameter-efficient fine-tuning (PEFT) methods, which adapt an LLM by training only a small subset of its parameters, as well as full fine-tuning for maximum performance.
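Below is a minimal sketch of the LoRA approach using the Hugging Face peft library. The model name (gpt2), target modules, and hyperparameters are illustrative assumptions, not values taken from this repository.

```python
# Minimal LoRA fine-tuning setup sketch, assuming the `transformers`
# and `peft` packages are installed. All names and values below are
# illustrative, not taken from this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # assumption: any causal LM would do
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# LoRA injects small low-rank adapter matrices and trains only those,
# leaving the original pre-trained weights frozen.
lora_config = LoraConfig(
    r=8,                        # rank of the adapter matrices
    lora_alpha=16,              # scaling factor applied to the adapters
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters
```

QLoRA follows the same pattern but loads the base model in 4-bit quantized form before attaching the adapters, which is what makes fine-tuning feasible on a single consumer GPU.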
No commits in the last 6 months.
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Jul 27, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/vaibhavnayak30/llm_finetuning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
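For scripted access, a minimal Python sketch of the same request is below. It assumes only that the endpoint returns JSON; the response schema is not documented here.

```python
# Sketch of calling the stats endpoint from Python; assumes the
# endpoint returns JSON (the field layout is not documented here).
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "ml-frameworks/vaibhavnayak30/llm_finetuning"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting)
print(resp.json())
```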
Higher-rated alternatives
limix-ldm-ai/LimiX
LimiX: Unleashing Structured-Data Modeling Capability for Generalist Intelligence...
XXO47OXX/layer-scan
Automated LLM layer duplication config scanner — find the optimal (i,j) for any model + task
tatsu-lab/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
google-research/plur
PLUR (Programming-Language Understanding and Repair) is a collection of source code datasets...
thuml/LogME
Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML...