Awesome_GPT_Super_Prompting and gpt

These are ecosystem siblings where one provides offensive security research (jailbreaks, prompt injection techniques) and the other provides defensive implementations (prompt engineering and security hardening), both addressing the same attack surface in LLM applications.

gpt
29
Experimental
Maintenance 13/25
Adoption 10/25
Maturity 16/25
Community 21/25
Maintenance 10/25
Adoption 5/25
Maturity 1/25
Community 13/25
Stars: 3,730
Forks: 466
Downloads:
Commits (30d): 1
Language: HTML
License: GPL-3.0
Stars: 10
Forks: 2
Downloads:
Commits (30d): 0
Language: Python
License:
No Package No Dependents
No License No Package No Dependents

About Awesome_GPT_Super_Prompting

CyberAlbSecOP/Awesome_GPT_Super_Prompting

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.

Curated directory aggregating jailbreak techniques, leaked system prompts, and prompt injection exploits across multiple LLM platforms, with organized sections for offensive attack vectors, defensive mitigation strategies, and vulnerability datasets. Serves as a living reference linking to external repositories, research implementations, and community-contributed prompt collections rather than hosting original code. Targets security researchers, red teamers, and practitioners developing LLM robustness testing and adversarial prompt engineering capabilities.

About gpt

4ndr0666/gpt

A.I. Sorcery

Scores updated daily from GitHub, PyPI, and npm data. How scores work