Direct Preference Optimization LLM Tools

Methods and implementations for training LLMs through preference learning without explicit reward models, including DPO variants, reference-free approaches, and token-level optimization techniques. Does NOT include general RLHF, reward model training, or non-preference-based fine-tuning approaches.

There are 12 direct preference optimization tools tracked. The highest-rated is codelion/pts at 35/100 with 146 stars.

Get all 12 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=direct-preference-optimization&limit=20"
```

The API is open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | codelion/pts | Pivotal Token Search | 35 | Emerging |
| 2 | dannylee1020/openpo | Building synthetic data for preference tuning | 30 | Emerging |
| 3 | DtYXs/Pre-DPO | Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using... | 28 | Experimental |
| 4 | RLHFlow/Directional-Preference-Alignment | Directional Preference Alignment | 25 | Experimental |
| 5 | pspdada/Uni-DPO | [ICLR 2026] Official repository of "Uni-DPO: A Unified Paradigm for Dynamic... | 23 | Experimental |
| 6 | Rahulkumar010/microDPO | microDPO: A minimalist, pure PyTorch implementation of Direct Preference... | 23 | Experimental |
| 7 | line/sacpo | [NeurIPS 2024] SACPO (Stepwise Alignment for Constrained Policy Optimization) | 21 | Experimental |
| 8 | ikun-llm/ikun-DPO | Preference alignment training \| Direct Preference Optimization 👍👎 | 14 | Experimental |
| 9 | liushunyu/awesome-direct-preference-optimization | A Survey of Direct Preference Optimization (DPO) | 12 | Experimental |
| 10 | codebywiam/fine-tuning-llm-dpo | This project demonstrates how to fine-tune a GPT-2 model using Direct... | 11 | Experimental |
| 11 | Anirvan-Krishna/safety-alignment-of-gpt2 | A comparative study of Proximal Policy Optimization (PPO) for RLHF and... | 11 | Experimental |
| 12 | yflyzhang/RankPO | RankPO: Rank Preference Optimization | 11 | Experimental |