Awesome_GPT_Super_Prompting and gpt
These are ecosystem siblings where one provides offensive security research (jailbreaks, prompt injection techniques) and the other provides defensive implementations (prompt engineering and security hardening), both addressing the same attack surface in LLM applications.
About Awesome_GPT_Super_Prompting
CyberAlbSecOP/Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Curated directory aggregating jailbreak techniques, leaked system prompts, and prompt injection exploits across multiple LLM platforms, with organized sections for offensive attack vectors, defensive mitigation strategies, and vulnerability datasets. Serves as a living reference linking to external repositories, research implementations, and community-contributed prompt collections rather than hosting original code. Targets security researchers, red teamers, and practitioners developing LLM robustness testing and adversarial prompt engineering capabilities.
About gpt
4ndr0666/gpt
A.I. Sorcery
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work