chatgpt_system_prompt and Awesome_GPT_Super_Prompting
These are ecosystem siblings—both are curated repositories documenting system prompts, injection techniques, and security vulnerabilities in GPT systems, serving as complementary knowledge bases for understanding and testing LLM prompt security rather than competing implementations.
About chatgpt_system_prompt
LouisShark/chatgpt_system_prompt
A collection of GPT system prompts and various prompt injection/leaking knowledge.
Organizes extracted system prompts from custom GPTs and ChatGPT instances with indexed searchability via TOC.md and a local `idxtool` script for rapid lookups. Includes security documentation and vulnerability analysis patterns to help developers understand prompt injection attack vectors and defensive techniques. Automatically maintains a table of contents through GitHub Actions workflows, enabling community contributions of newly discovered prompts and exploitation methods.
About Awesome_GPT_Super_Prompting
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Curated directory aggregating jailbreak techniques, leaked system prompts, and prompt injection exploits across multiple LLM platforms, with organized sections for offensive attack vectors, defensive mitigation strategies, and vulnerability datasets. Serves as a living reference linking to external repositories, research implementations, and community-contributed prompt collections rather than hosting original code. Targets security researchers, red teamers, and practitioners developing LLM robustness testing and adversarial prompt engineering capabilities.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work