CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hacks, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.
Curated directory aggregating jailbreak techniques, leaked system prompts, and prompt injection exploits across multiple LLM platforms, with organized sections for offensive attack vectors, defensive mitigation strategies, and vulnerability datasets. Serves as a living reference linking to external repositories, research implementations, and community-contributed prompt collections rather than hosting original code. Targets security researchers, red teamers, and practitioners developing LLM robustness testing and adversarial prompt engineering capabilities.
3,730 stars. Actively maintained with 1 commit in the last 30 days.
Stars: 3,730
Forks: 466
Language: HTML
License: GPL-3.0
Category: prompt-engineering
Last pushed: Mar 06, 2026
Commits (30d): 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/CyberAlbSecOP/Awesome_GPT_Super_Prompting"
Open to everyone: 100 requests/day with no API key; a free key raises the limit to 1,000 requests/day.
Related tools
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
citiususc/smarty-gpt
A wrapper of LLMs that biases its behaviour using prompts and contexts in a transparent manner...
timqian/openprompt.co
Create. Use. Share. ChatGPT prompts
B3o/GPTS-Prompt-Collection
Collect the prompts of GPTs.
ClarkFieseln/tea2adt
Command-line utility for Remote Shell, Remote AI Prompt, Chat and File Transfer, that reads and...