DtYXs/Pre-DPO
Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
Quality score: 28 / 100 (Experimental)
No commits in the last 6 months (stale for 6 months).
No package published; no known dependents.
Score breakdown:
Maintenance: 2 / 25
Adoption: 4 / 25
Maturity: 9 / 25
Community: 13 / 25
Stars: 7
Forks: 2
Language: Python
License: Apache-2.0
Category:
Last pushed: Apr 23, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/DtYXs/Pre-DPO"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
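The curl command above can also be wrapped in a few lines of Python. This is a minimal sketch that only builds the request URL from its path segments, assuming the path shape `quality/<category>/<owner>/<repo>` seen in the example; the response schema is not documented on this page, so callers should inspect the returned JSON themselves.

```python
import urllib.parse

# Base endpoint taken from the curl example on this page.
BASE_URL = "https://pt-edge.onrender.com/api/v1/quality"

def build_quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository.

    Each path segment is percent-encoded so unusual owner or repo
    names cannot break the path structure.
    """
    parts = [urllib.parse.quote(p, safe="") for p in (category, owner, repo)]
    return f"{BASE_URL}/{'/'.join(parts)}"

# Reproduces the URL from the curl example:
url = build_quality_url("llm-tools", "DtYXs", "Pre-DPO")
```

The resulting URL can then be fetched with any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`) to retrieve the same data the curl command returns.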
Higher-rated alternatives:
codelion/pts (score 41): Pivotal Token Search
RLHFlow/Directional-Preference-Alignment (score 32): Directional Preference Alignment
dannylee1020/openpo (score 30): Building synthetic data for preference tuning
pspdada/Uni-DPO (score 23): [ICLR 2026] Official repository of "Uni-DPO: A Unified Paradigm for Dynamic Preference...
Rahulkumar010/microDPO (score 23): microDPO: A minimalist, pure PyTorch implementation of Direct Preference Optimization. Inspired...