Bharathyalagi/Fine-Tuning-a-LLaMA-2-Model-on-Medical-Text-Data-Using-Hugging-Face
This project demonstrates how to fine-tune a pre-trained LLaMA 2 model in Google Colab using Hugging Face Transformers and parameter-efficient fine-tuning (PEFT) with LoRA. The base model, aboonaji/llama2finetune-v2, was loaded from the Hugging Face Hub and fine-tuned on a medical-terminology dataset (wiki_medical_terms_llam2_format).
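The recipe described above can be sketched roughly as follows. This is a minimal, hypothetical outline, not the notebook's actual code: it assumes the `transformers`, `peft`, `trl`, and `datasets` libraries are installed, the hyperparameters are illustrative, and the full Hub path of the dataset (only its short name appears above) is left as a placeholder comment.

```python
# Illustrative LoRA fine-tuning sketch. Model/dataset IDs come from the
# project description; all hyperparameters below are assumptions.
MODEL_ID = "aboonaji/llama2finetune-v2"
DATASET_NAME = "wiki_medical_terms_llam2_format"  # full Hub path not given above

# LoRA adapter settings (illustrative values, not the notebook's).
LORA_CONFIG = {
    "r": 16,                               # adapter rank
    "lora_alpha": 32,                      # scaling factor
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],  # attention projections to adapt
}


def main() -> None:
    # Heavy imports kept inside main() so the sketch can be read/checked
    # without the libraries installed.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
    from trl import SFTTrainer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    # Wrap the base model with trainable low-rank adapters; base weights stay frozen.
    model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", **LORA_CONFIG))

    dataset = load_dataset(DATASET_NAME, split="train")  # adjust path as needed

    trainer = SFTTrainer(
        model=model,
        train_dataset=dataset,
        tokenizer=tokenizer,
        dataset_text_field="text",  # field name assumed; check the dataset schema
        args=TrainingArguments(
            output_dir="llama2-medical-lora",
            per_device_train_batch_size=4,
            num_train_epochs=1,
        ),
    )
    trainer.train()


if __name__ == "__main__":
    main()
```

Because only the small adapter matrices are trained, this approach fits a 7B-parameter model into Colab-class GPU memory, which is the usual motivation for PEFT in this setting.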
Stars
1
Forks
—
Language
Jupyter Notebook
License
MIT
Category
Last pushed
Nov 13, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/Bharathyalagi/Fine-Tuning-a-LLaMA-2-Model-on-Medical-Text-Data-Using-Hugging-Face"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
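The endpoint above follows a predictable `/{owner}/{repo}` path scheme, so a small client is easy to write. Below is a minimal sketch using only the Python standard library; the response's JSON schema is not documented here, so the fetch helper simply returns the parsed body, and the anonymous rate limit (100 requests/day) applies.

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"


def build_quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and parse one repository's quality record.

    Subject to the anonymous rate limit; response schema is undocumented,
    so the parsed JSON is returned as-is.
    """
    with urllib.request.urlopen(build_quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # No network call here; just show the URL for this page's repository.
    print(build_quality_url(
        "Bharathyalagi",
        "Fine-Tuning-a-LLaMA-2-Model-on-Medical-Text-Data-Using-Hugging-Face",
    ))
```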
Higher-rated alternatives
uds-lsv/bert-stable-fine-tuning
On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines
VanekPetr/flan-t5-text-classifier
Fine-tuning of the Flan-T5 LLM for text classification 🤖 focuses on adapting a state-of-the-art...
MeryylleA/lunariscodex
A high-performance PyTorch toolkit for pre-training modern, Llama-style language models. Based...
kingTLE/literary-alpaca2
From vocabulary to fine-tuning: everything you need
RunxinXu/ChildTuning
Source code for our EMNLP'21 paper 《Raise a Child in Large Language Model: Towards Effective and...