0xk1h0/ChatGPT_DAN

ChatGPT DAN, jailbreak prompts

Score: 48 / 100 (Emerging)

Collection of prompt injection techniques designed to bypass ChatGPT's safety guidelines through role-play manipulation, leveraging persona switching (classic vs. jailbreak responses) and psychological framing to circumvent content policies. The repository maintains multiple evolving DAN prompt versions—including DAN 13.0 for GPT-4 and DAN 12.0 for GPT-3.5—that exploit the model's instruction-following behavior by establishing alternate personas with fabricated "freedom" from OpenAI constraints. Community contributors continuously develop and test new workaround prompts, documenting which variations remain effective against current model iterations.


No License · No Package · No Dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 8 / 25
Community: 20 / 25

How are scores calculated?
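The four subscores above are each out of 25, and for this repository they sum exactly to the overall 48 / 100. Treating the overall score as a plain sum of subscores is an observation from this page's numbers, not documented scoring behavior:

```python
# Subscores as shown on this page (each out of 25).
subscores = {
    "Maintenance": 10,
    "Adoption": 10,
    "Maturity": 8,
    "Community": 20,
}

# Assumption: the overall 0-100 score is the simple sum of the four subscores.
overall = sum(subscores.values())
print(overall)  # 48 — matches the "48 / 100" shown above
```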

Stars: 11,501
Forks: 1,098
Language: —
License: —
Last pushed: Mar 02, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/0xk1h0/ChatGPT_DAN"

Open to everyone: 100 requests/day with no key required; a free API key raises the limit to 1,000/day.
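The curl call above can be wrapped in a small client using only the Python standard library. The URL path shape (`{owner}/{repo}`) is taken from the example; the response is assumed to be JSON, since its schema is not documented on this page:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo endpoint, matching the curl example's path shape.
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    # Anonymous access is rate-limited to 100 requests/day (see note above).
    # Assumption: the endpoint returns a JSON object; inspect the payload
    # before relying on any particular field names.
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(quality_url("0xk1h0", "ChatGPT_DAN"))
```

Authenticated access (the 1,000/day tier) would presumably attach the key as a header or query parameter, but this page does not say which, so the sketch omits it.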