ChatGPT_DAN and ChatGPT-Jailbreaks-GIT

These are competitors: both repositories collect jailbreak prompts intended to bypass ChatGPT's safety guidelines, offering alternative routes to the same goal of unrestricted model outputs.

Metric          ChatGPT_DAN      ChatGPT-Jailbreaks-GIT
Overall score   48 (Emerging)    n/a
Maintenance     10/25            13/25
Adoption        10/25            7/25
Maturity        8/25             1/25
Community       20/25            18/25
Stars           11,501           26
Forks           1,098            16
Downloads       n/a              n/a
Commits (30d)   0                0
Language        n/a              Rich Text Format
License         No License       No License
Package         No Package       No Package
Dependents      No Dependents    No Dependents

About ChatGPT_DAN

0xk1h0/ChatGPT_DAN

ChatGPT DAN, Jailbreaks prompt

Collection of prompt injection techniques designed to bypass ChatGPT's safety guidelines through role-play manipulation, leveraging persona switching (classic vs. jailbreak responses) and psychological framing to circumvent content policies. The repository maintains multiple evolving DAN prompt versions—including DAN 13.0 for GPT-4 and DAN 12.0 for GPT-3.5—that exploit the model's instruction-following behavior by establishing alternate personas with fabricated "freedom" from OpenAI constraints. Community contributors continuously develop and test new workaround prompts, documenting which variations remain effective against current model iterations.

About ChatGPT-Jailbreaks-GIT

arinze1/ChatGPT-Jailbreaks-GIT

ChatGPT and Google AI Studio

Scores updated daily from GitHub, PyPI, and npm data.