mirzayasirabdullahbaig07/Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA

This project showcases parameter-efficient fine-tuning of the LLaMA 3.2 (3B) language model using PEFT (Parameter-Efficient Fine-Tuning) and LoRA (Low-Rank Adaptation). It is optimized for minimal resource usage and trained on a domain-specific dataset to enhance performance in specialized tasks.
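The repository's notebook isn't reproduced on this page, so the following is a minimal sketch of a standard LoRA setup with Hugging Face's transformers and peft libraries; the model ID, rank, alpha, dropout, and target modules are illustrative assumptions, not values taken from the project.

# Illustrative LoRA fine-tuning setup (assumed hyperparameters, not the repo's).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "meta-llama/Llama-3.2-3B"  # gated on Hugging Face; requires access approval

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA freezes the base weights and trains small low-rank update matrices
# injected into the attention projections, so only a tiny fraction of the
# 3B parameters is updated.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                 # rank of the low-rank updates (assumed)
    lora_alpha=32,        # scaling factor (assumed)
    lora_dropout=0.05,    # dropout on the LoRA layers (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows how few parameters LoRA trains

From here the wrapped model can be passed to a regular transformers Trainer; only the adapter weights are saved, which keeps checkpoints small.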

Score: 18 / 100 (Experimental)

No commits in the last 6 months.

No License · Stale 6m · No Package · No Dependents

Maintenance: 2 / 25
Adoption: 6 / 25
Maturity: 1 / 25
Community: 9 / 25


Stars: 18
Forks: 2
Language: Jupyter Notebook
License: none
Last pushed: Jun 01, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/mirzayasirabdullahbaig07/Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA"

The API is open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
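For scripted access, a minimal Python equivalent of the curl call might look like this; the response is assumed to be JSON, and its schema is not documented on this page.

# Fetch the quality score for this repository via the public API.
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/transformers/"
    "mirzayasirabdullahbaig07/Fine-Tuning-LLaMA-3.2-3B-Using-PEFT-LoRA"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors or rate limiting
print(resp.json())       # payload assumed to be JSON; print it raw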