ChatGPT_DAN and ChatGPT-Jailbreaks-GIT
These repositories are competitors: both collect jailbreak prompts intended to bypass ChatGPT's safety guidelines, offering alternative routes to the same goal of unrestricted model output.
About ChatGPT_DAN
0xk1h0/ChatGPT_DAN
ChatGPT DAN, Jailbreaks prompt
A collection of prompt-injection techniques designed to bypass ChatGPT's safety guidelines through role-play manipulation, combining persona switching (paired "classic" and "jailbreak" responses) with psychological framing to circumvent content policies. The repository maintains multiple evolving DAN prompt versions, including DAN 13.0 targeting GPT-4 and DAN 12.0 targeting GPT-3.5, that exploit the model's instruction-following behavior by establishing an alternate persona with a fabricated "freedom" from OpenAI's constraints. Community contributors continually develop and test new workaround prompts, documenting which variants remain effective against current model iterations.
About ChatGPT-Jailbreaks-GIT
arinze1/ChatGPT-Jailbreaks-GIT
ChatGPT and Google AI Studio