Franekskc/gemma3-qa-finetuning

Comparing Full Fine-Tuning, LoRA, and Layer Freezing for extractive QA on SQuAD 1.1, using Gemma 3 (~1B) on a single GPU. Includes EM/F1 evaluation and peak VRAM/time tracking.
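The tagline mentions LoRA on Gemma 3; as an illustration only, here is a minimal LoRA setup sketch assuming Hugging Face transformers and peft. The model ID, target modules, and hyperparameters below are assumptions for the example, not taken from this repo:

# Illustrative LoRA setup sketch, NOT this repo's actual code.
# Assumes transformers + peft; model ID and hyperparameters are assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("google/gemma-3-1b-it")  # assumed ~1B checkpoint

lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trainable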

Score: 12 / 100 (Experimental)
No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 1 / 25
Maturity: 1 / 25
Community: 0 / 25

Stars: 1
Forks:
Language: Python
License: None
Last pushed: Jan 28, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Franekskc/gemma3-qa-finetuning"

Open to everyone: 100 requests/day with no key; a free API key raises the limit to 1,000 requests/day.
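For programmatic access, a minimal Python sketch using the requests library (illustrative; the response schema is not documented on this page, so the JSON payload is printed as-is):

# Query the quality endpoint and print the raw JSON payload.
import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Franekskc/gemma3-qa-finetuning")
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # surfaces rate-limit (429) and server errors
print(resp.json())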