Comparison: ChatGPT-Jailbreaks-GIT vs ChatGPT_DAN
ChatGPT-Jailbreaks-GIT
Scores: Maintenance 10/25, Adoption 7/25, Maturity 8/25, Community 18/25
Stars: 26 | Forks: 16 | Downloads: — | Commits (30d): 0
Language: Rich Text Format | License: —
Flags: No License, No Package, No Dependents

ChatGPT_DAN
Scores: Maintenance 2/25, Adoption 3/25, Maturity 7/25, Community 12/25
Stars: 4 | Forks: 1 | Downloads: — | Commits (30d): 0
Language: — | License: —
Flags: No License, Stale 6m, No Package, No Dependents
About ChatGPT-Jailbreaks-GIT
arinze1/ChatGPT-Jailbreaks-GIT
ChatGPT and Google AI Studio
This project collects example prompts designed to bypass safety features and content restrictions in large language models such as ChatGPT and Google AI Studio. It provides specific text inputs intended to elicit responses the models were trained to avoid. Its primary audience is people probing the boundaries and limitations of AI models, typically for research, ethical hacking, or content generation outside the usual guidelines.
Topics: AI-safety-testing, prompt-engineering, content-moderation-bypasses, generative-AI-exploration
About ChatGPT_DAN
Sayedcodes/ChatGPT_DAN
ChatGPT "DAN" and "Jailbreak" PROMPTS
Scores updated daily from GitHub, PyPI, and npm data.