ChatGPT_DAN and gpt-prompt
These are competitors: both attempt to circumvent ChatGPT's safety guidelines through adversarial prompting. DAN (Do Anything Now) uses roleplay-based jailbreaks, while gpt-prompt offers a broader collection of unrestricted prompts, making them alternative approaches to the same goal of removing content restrictions.
About ChatGPT_DAN
0xk1h0/ChatGPT_DAN
ChatGPT DAN, Jailbreaks prompt
A collection of prompt injection techniques designed to bypass ChatGPT's safety guidelines through role-play manipulation, leveraging persona switching (classic vs. jailbreak responses) and psychological framing to circumvent content policies. The repository maintains multiple evolving DAN prompt versions, including DAN 13.0 for GPT-4 and DAN 12.0 for GPT-3.5, that exploit the model's instruction-following behavior by establishing alternate personas with a fabricated "freedom" from OpenAI constraints. Community contributors continually develop and test new workaround prompts, documenting which variations remain effective against current model iterations.
About gpt-prompt
Vauth/gpt-prompt
ChatGPT advanced prompts.