MadhanMohanReddy2301/gemma-Instruct-2b-Finetuning-on-alpaca
This project demonstrates the steps required to fine-tune the Gemma model on the Alpaca dataset for tasks such as code generation. It uses QLoRA quantization to reduce memory usage and the SFTTrainer from the trl library for supervised fine-tuning.
No commits in the last 6 months.
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Jun 30, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/MadhanMohanReddy2301/gemma-Instruct-2b-Finetuning-on-alpaca"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
Higher-rated alternatives
GURPREETKAURJETHRA/PaliGemma-Inference-and-Fine-Tuning
PaliGemma Inference and Fine Tuning
LikithMeruvu/Gemma2B_Finetuning_Medium
This repo shows how to fine-tune Google's new Gemma LLM using your custom instruction...
vdt104/Finetune_LLM_Gemma-2b-it
This project involves fine-tuning the Gemma-2b-it model for the specific task of generating code.
ahmeterdempmk/Gemma2-2B-E-Commerce-Based-Fine-Tuning
Gemma2 2B model fine-tuned on e-commerce data.
Miguell-J/Google-Competition-Gemma-2
Fine-tuning of Gemma 2 model in Google Competition using a dataset of Chinese poetry. The goal...