chatgpt_system_prompt and Awesome_GPT_Super_Prompting

These repositories are ecosystem siblings: both curate system prompts, injection techniques, and documented security vulnerabilities in GPT systems. They serve as complementary knowledge bases for understanding and testing LLM prompt security, not as competing implementations.

| | chatgpt_system_prompt | Awesome_GPT_Super_Prompting |
|---|---|---|
| Overall score | 65 (Established) | |
| Maintenance | 17/25 | 13/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 22/25 | 21/25 |
| Stars | 10,443 | 3,730 |
| Forks | 1,455 | 466 |
| Downloads | | |
| Commits (30d) | 14 | 1 |
| Language | HTML | HTML |
| License | MIT | GPL-3.0 |
| Package | none | none |
| Dependents | none | none |

About chatgpt_system_prompt

LouisShark/chatgpt_system_prompt

A collection of GPT system prompts and various prompt injection/leaking knowledge.

Organizes extracted system prompts from custom GPTs and ChatGPT instances, indexed for search via TOC.md and a local `idxtool` script for rapid lookups. Includes security documentation and vulnerability-analysis patterns to help developers understand prompt injection attack vectors and defensive techniques. The table of contents is maintained automatically through GitHub Actions workflows, so the community can contribute newly discovered prompts and exploitation methods without hand-editing the index.

About Awesome_GPT_Super_Prompting

CyberAlbSecOP/Awesome_GPT_Super_Prompting

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.

Curated directory aggregating jailbreak techniques, leaked system prompts, and prompt injection exploits across multiple LLM platforms, with organized sections for offensive attack vectors, defensive mitigation strategies, and vulnerability datasets. Serves as a living reference linking to external repositories, research implementations, and community-contributed prompt collections rather than hosting original code. Targets security researchers, red teamers, and practitioners developing LLM robustness testing and adversarial prompt engineering capabilities.

Scores updated daily from GitHub, PyPI, and npm data.