codebywiam/fine-tuning-llm-dpo
This project demonstrates how to fine-tune a GPT-2 model using Direct Preference Optimization (DPO) with the Hugging Face trl (Transformer Reinforcement Learning) library.
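Under the hood, DPO replaces an RLHF reward model with a simple log-sigmoid objective over preference pairs. A minimal sketch of that per-pair loss in pure Python (the function name and example log-probabilities are illustrative, not taken from this repository):

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-pair DPO loss: -log sigmoid(beta * margin).

    Each argument is a sequence log-probability (sum of token log-probs
    for the chosen or rejected response) under the trainable policy or
    the frozen reference model; beta controls deviation from the reference.
    """
    margin = ((policy_logp_chosen - ref_logp_chosen)
              - (policy_logp_rejected - ref_logp_rejected))
    # Numerically plain logistic; fine for scalar illustration.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# When the policy matches the reference, the margin is 0 and the
# loss is exactly log 2.
print(dpo_loss(-12.0, -15.0, -12.0, -15.0))  # ≈ 0.6931
```

In the trl library this objective is handled by `DPOTrainer`, which computes the same margin over batches of (prompt, chosen, rejected) triples.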
No commits in the last 6 months.
Stars: —
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Jun 06, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/codebywiam/fine-tuning-llm-dpo"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
Higher-rated alternatives
codelion/pts: Pivotal Token Search
RLHFlow/Directional-Preference-Alignment: Directional Preference Alignment
dannylee1020/openpo: Building synthetic data for preference tuning
DtYXs/Pre-DPO: Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
Rahulkumar010/microDPO: microDPO: A minimalist, pure PyTorch implementation of Direct Preference Optimization. Inspired...