Anirvan-Krishna/safety-alignment-of-gpt2
A comparative study of Proximal Policy Optimization (PPO) for RLHF and Direct Preference Optimization (DPO) for safety alignment of GPT-2 Medium, carried out as part of the course CS60216: Safety Fundamentals for Generative AI at IIT Kharagpur.
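The repository's notebooks are not reproduced on this page. As a reference for what the two methods optimize, below is a minimal PyTorch sketch of the standard objectives being compared: the clipped PPO policy loss (Schulman et al., 2017) and the DPO loss (Rafailov et al., 2023). Function and tensor names are illustrative assumptions, not identifiers from the repository.

    import torch
    import torch.nn.functional as F

    def ppo_clip_loss(logps, old_logps, advantages, clip_eps=0.2):
        # Clipped PPO surrogate: limit how far the updated policy's
        # probability ratio can move from the rollout policy.
        ratio = torch.exp(logps - old_logps)
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
        return -torch.min(ratio * advantages, clipped * advantages).mean()

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        # DPO: push the policy's log-ratio between chosen and rejected
        # completions above the frozen reference model's log-ratio.
        # Inputs are summed per-token log-probabilities per sequence.
        pi_logratios = policy_chosen_logps - policy_rejected_logps
        ref_logratios = ref_chosen_logps - ref_rejected_logps
        logits = beta * (pi_logratios - ref_logratios)
        # -log(sigmoid(x)) == softplus(-x), averaged over the batch
        return F.softplus(-logits).mean()

Both functions take detached log-probabilities from the relevant models; how the repository actually batches and computes them is not shown here.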
Stars: —
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 03, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Anirvan-Krishna/safety-alignment-of-gpt2"
Open to everyone: 100 requests/day with no key needed; get a free key for 1,000/day.
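For scripted access, the same request can be issued from Python. A minimal sketch, assuming the endpoint returns JSON; the response schema and any API-key mechanism are not documented on this page.

    import requests

    # Same GET request as the curl command above. That the response
    # body is JSON is an assumption; the schema is undocumented here.
    url = ("https://pt-edge.onrender.com/api/v1/quality/llm-tools/"
           "Anirvan-Krishna/safety-alignment-of-gpt2")
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    print(resp.json())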
Higher-rated alternatives
codelion/pts
Pivotal Token Search
RLHFlow/Directional-Preference-Alignment
Directional Preference Alignment
dannylee1020/openpo
Building synthetic data for preference tuning
DtYXs/Pre-DPO
Pre-DPO: Improving Data Utilization in Direct Preference Optimization Using a Guiding Reference Model
Rahulkumar010/microDPO
microDPO: A minimalist, pure PyTorch implementation of Direct Preference Optimization. Inspired...