viniciusds2020/tinyllama-finetuning
Fine-tuning script for the TinyLlama-1.1B model using LoRA (Low-Rank Adaptation), optimized to run on CPU. Uses the OpenAssistant Guanaco dataset and the Hugging Face Transformers, PEFT, and TRL libraries.
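For orientation, a minimal sketch of what such a CPU LoRA fine-tuning script typically looks like with these libraries. This is not the repository's actual code: the checkpoint name (TinyLlama/TinyLlama-1.1B-Chat-v1.0), the dataset id (timdettmers/openassistant-guanaco), the LoRA hyperparameters, and the trl 0.7-style SFTTrainer arguments are all assumptions.

from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Assumed checkpoint and dataset ids; the repository may use different ones.
MODEL_ID = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
DATA_ID = "timdettmers/openassistant-guanaco"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)  # plain fp32; runs on CPU when no GPU is visible

dataset = load_dataset(DATA_ID, split="train")

# Low-rank adapters on the attention projections; hyperparameters are illustrative.
peft_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="tinyllama-lora-cpu",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # tiny batches with accumulation to fit CPU memory
    num_train_epochs=1,
    learning_rate=2e-4,
    logging_steps=10,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # the Guanaco split keeps prompt and answer in a single "text" column
    max_seq_length=512,
)

trainer.train()
trainer.save_model("tinyllama-lora-cpu")  # saves only the LoRA adapter weights

Because only the low-rank adapter matrices are trained, the optimizer state stays small, which is what makes a run like this feasible on CPU at all.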
Stars: —
Forks: —
Language: Python
License: —
Category: —
Last pushed: Jan 30, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/viniciusds2020/tinyllama-finetuning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
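For scripted access, the same endpoint can be queried from Python. A minimal sketch with the requests library, assuming anonymous access; the response schema and the way an API key is attached are not documented on this page.

import requests

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/viniciusds2020/tinyllama-finetuning")

# Anonymous requests are limited to 100/day per the note above;
# passing a key is not documented here, so none is sent.
resp = requests.get(URL, timeout=30)
resp.raise_for_status()
print(resp.json())  # shape of the JSON payload is not documented on this page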
Higher-rated alternatives
gustavecortal/gpt-j-fine-tuning-example
Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
Ebimsv/LLM-Lab
Pretraining and Finetuning Language Model
msmrexe/pytorch-lora-from-scratch
A from-scratch PyTorch implementation of Low-Rank Adaptation (LoRA) to efficiently fine-tune...
linhaowei1/Fine-tuning-Scaling-Law
🌹[ICML 2024] Selecting Large Language Model to Fine-tune via Rectified Scaling Law
aamanlamba/phi3-tune-payments
Bidirectional fine-tuning of Microsoft's Phi-3-Mini model for payment transaction processing...