Direct Preference Optimization LLM Tools
Methods and implementations for training LLMs through preference learning without explicit reward models, including DPO variants, reference-free approaches, and token-level optimization techniques. Does NOT include general RLHF, reward model training, or non-preference-based fine-tuning approaches.
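The objective shared by the tools below can be sketched in a few lines. This is a minimal, illustrative implementation of the standard DPO loss for a single preference pair, using plain Python and scalar sequence log-probabilities; the function name and inputs are my own, not from any listed repository.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy being trained and under a frozen
    reference model; beta scales the implicit KL-style penalty.
    """
    # Implicit reward of each response: how much the policy's
    # log-probability diverges from the reference model's.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    # -log sigmoid(beta * margin difference): minimized when the policy
    # prefers the chosen response more strongly than the reference does.
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When policy and reference agree exactly, loss = -log sigmoid(0) = log 2.
loss = dpo_loss(-10.0, -12.0, -10.0, -12.0)
```

No reward model appears anywhere: the preference signal is expressed directly through the two log-probability margins, which is what distinguishes these tools from general RLHF pipelines.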
This list tracks 12 direct preference optimization tools. The highest-rated is codelion/pts, scoring 35/100 with 146 stars.
Get all 12 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=direct-preference-optimization&limit=20"
```
The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
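The same request can be made from Python with only the standard library. This sketch rebuilds the query URL from the curl example above; the function names are illustrative, and `fetch_dataset` assumes the endpoint returns a JSON body.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def dataset_url(domain, subcategory, limit=20):
    """Build the query URL used in the curl example."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{API_BASE}?{urlencode(params)}"

def fetch_dataset(domain, subcategory, limit=20):
    """Fetch and decode the JSON payload (requires network access)."""
    with urlopen(dataset_url(domain, subcategory, limit)) as resp:
        return json.load(resp)

url = dataset_url("llm-tools", "direct-preference-optimization")
```

`urlencode` handles parameter escaping, so the helper stays correct if a domain or subcategory ever contains characters that need percent-encoding.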
| # | Tool | Description | Score | Tier |
|---|---|---|---|---|
| 1 | codelion/pts | Pivotal Token Search | 35 | Emerging |
| 2 | dannylee1020/openpo | Building synthetic data for preference tuning | | Emerging |
| 3 | DtYXs/Pre-DPO | Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using... | | Experimental |
| 4 | RLHFlow/Directional-Preference-Alignment | Directional Preference Alignment | | Experimental |
| 5 | pspdada/Uni-DPO | [ICLR 2026] Official repository of "Uni-DPO: A Unified Paradigm for Dynamic... | | Experimental |
| 6 | Rahulkumar010/microDPO | microDPO: A minimalist, pure PyTorch implementation of Direct Preference... | | Experimental |
| 7 | line/sacpo | [NeurIPS 2024] SACPO (Stepwise Alignment for Constrained Policy Optimization) | | Experimental |
| 8 | ikun-llm/ikun-DPO | Preference alignment training / Direct Preference Optimization 👍👎 | | Experimental |
| 9 | liushunyu/awesome-direct-preference-optimization | A Survey of Direct Preference Optimization (DPO) | | Experimental |
| 10 | codebywiam/fine-tuning-llm-dpo | This project demonstrates how to fine-tune a GPT-2 model using Direct... | | Experimental |
| 11 | Anirvan-Krishna/safety-alignment-of-gpt2 | A comparative study of Proximal Policy Optimization (PPO) for RLHF and... | | Experimental |
| 12 | yflyzhang/RankPO | RankPO: Rank Preference Optimization | | Experimental |