Miguell-J/Google-Competition-Gemma-2
Fine-tuning of the Gemma 2 model for the Google Competition using a dataset of classical Chinese poetry. The goal is to adapt the model to generate Chinese poetry in a classical style by training it on a subset of poems. The fine-tuning process uses LoRA (Low-Rank Adaptation) for parameter-efficient adaptation.
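The core idea behind LoRA can be sketched numerically: the pretrained weight matrix is frozen, and only a low-rank update is trained. This is a minimal illustration of the technique named in the description, not code from the repository's notebook; all shapes, names, and hyperparameters here are assumptions.

```python
import numpy as np

# LoRA idea: instead of updating a full weight matrix W (d_out x d_in),
# learn two small matrices B (d_out x r) and A (r x d_in) with rank
# r << min(d_out, d_in), and apply W' = W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus scaled low-rank update; because B is zero-initialized,
    # the adapted model matches the pretrained one exactly at the start.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((2, d_in))
assert np.allclose(lora_forward(x), x @ W.T)  # identity at initialization

# Trainable parameters vs. full fine-tuning of this one matrix:
full_params = W.size           # 64 * 64 = 4096
lora_params = A.size + B.size  # 4 * 64 + 64 * 4 = 512
print(f"LoRA trains {lora_params} of {full_params} parameters")
```

The efficiency gain is why LoRA suits competition settings: only the small A and B matrices need gradients and optimizer state, while the base model stays frozen.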
No commits in the last 6 months.
Stars: 4
Forks: —
Language: Jupyter Notebook
License: MIT
Category: —
Last pushed: Feb 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Miguell-J/Google-Competition-Gemma-2"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
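The curl command above can also be issued from Python with the standard library. This is a hedged sketch: the endpoint URL is taken from the page, but the shape of the JSON response is an assumption and is not validated here.

```python
import json
import urllib.request

# Base endpoint from the curl example shown above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def build_url(owner: str, repo: str) -> str:
    # The path mirrors the curl example: BASE/{owner}/{repo}
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Network call; subject to the 100 requests/day anonymous limit.
    with urllib.request.urlopen(build_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

print(build_url("Miguell-J", "Google-Competition-Gemma-2"))
```

Using the standard library keeps the snippet dependency-free; `requests` would work equally well if it is already installed.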
Higher-rated alternatives
robitec97/gemma3.c
Gemma 3 pure inference in C
GURPREETKAURJETHRA/PaliGemma-Inference-and-Fine-Tuning
PaliGemma Inference and Fine Tuning
GURPREETKAURJETHRA/PaliGemma-FineTuning
PaliGemma FineTuning
LikithMeruvu/Gemma2B_Finetuning_Medium
This repo contains how to fine-tune Google's new Gemma LLM using your custom instruction...
stabgan/biogemma
BioGemma — Google Gemma 3 1B fine-tuned on medical/biomedical corpus for clinical NLP tasks