GPT-Jailbreak and ChatGPT-Developer-Mode

Metric          GPT-Jailbreak    ChatGPT-Developer-Mode
Score           44 (Emerging)    21 (Experimental)
Maintenance     0/25             0/25
Adoption        10/25            4/25
Maturity        16/25            8/25
Community       18/25            9/25
Stars           229              7
Forks           33               1
Commits (30d)   0                0
License         MIT              none

Both repositories are flagged Stale 6m, No Package, and No Dependents; ChatGPT-Developer-Mode is additionally flagged No License.

About GPT-Jailbreak

Techiral/GPT-Jailbreak

This repository documents jailbreak prompts for GPT-3, GPT-3.5, GPT-4, ChatGPT, and ChatGPT Plus. Following its instructions is claimed to bypass these models' safety restrictions and alter their default behavior.

This project is aimed at AI safety researchers and red teamers studying vulnerabilities in large language models such as GPT-3, GPT-4, and ChatGPT. It provides specific prompts that, when entered into these models, attempt to bypass their safety features. The result is model behavior outside the intended guardrails, which can be useful for security analysis or for exploring model limitations.

Tags: AI Safety Testing · Red Teaming · LLM Vulnerability Research · AI Security · Prompt Engineering

About ChatGPT-Developer-Mode

AmeliazOli/ChatGPT-Developer-Mode

ChatGPT Developer Mode is a jailbreak prompt designed to enable additional modification and customization of OpenAI's ChatGPT model.

Scores updated daily from GitHub, PyPI, and npm data.