0xk1h0/ChatGPT_DAN
ChatGPT DAN, Jailbreaks prompt
A collection of prompt-injection techniques designed to bypass ChatGPT's safety guidelines through role-play manipulation, persona switching (classic vs. jailbreak responses), and psychological framing. The repository maintains multiple evolving DAN prompt versions, including DAN 13.0 for GPT-4 and DAN 12.0 for GPT-3.5, which exploit the model's instruction-following behavior by establishing an alternate persona with a fabricated "freedom" from OpenAI constraints. Community contributors continuously develop and test new workaround prompts, documenting which variations remain effective against current model iterations.
Stars: 11,501
Forks: 1,098
Language: —
License: —
Category:
Last pushed: Mar 02, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/0xk1h0/ChatGPT_DAN"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Related tools
Batlez/ChatGPT-Jailbreak-Pro: The ultimate ChatGPT Jailbreak Tool with stunning themes, categorized prompts, and a...
verazuo/jailbreak_llms: [CCS'24] A dataset of 15,140 ChatGPT prompts collected from Reddit, Discord, websites, and...
Techiral/GPT-Jailbreak: This repository contains the jailbreaking process for GPT-3, GPT-4, GPT-3.5, ChatGPT, and...
arinze1/ChatGPT-Jailbreaks-GIT: ChatGPT and Google AI Studio
LeaderbotX400/chatbot-experiments: A place to store jailbreaks, results of some prompts, or helpful utilities