TrustAI-laboratory/Learn-Prompt-Hacking
A comprehensive prompt hacking course that records our progress through prompt engineering and prompt hacking material.
Covers practical attack vectors including ChatGPT jailbreaks, GPT Assistant prompt leaks, and prompt injection techniques, alongside defensive security measures for LLM applications. The curriculum integrates adversarial machine learning concepts with real-world GenAI development patterns, supported by research papers and conference materials. Targets both offensive security research and defensive mitigation strategies for organizations deploying large language models.
271 stars. No commits in the last 6 months.
Stars: 271
Forks: 34
Language: Jupyter Notebook
License: MIT
Category: prompt-engineering
Last pushed: Apr 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/TrustAI-laboratory/Learn-Prompt-Hacking"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
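The same endpoint can be called from code. A minimal sketch using only the Python standard library, assuming the endpoint from the curl example above returns JSON (the response schema is not documented here, so `fetch_quality` treats it as an opaque dict):

```python
# Hedged sketch: query the pt-edge quality API for a repository.
# The endpoint path mirrors the curl example on this page; the JSON
# response schema is an assumption, not documented here.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a repo, matching the curl example's path."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """GET the endpoint and parse the body as JSON (assumed format)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("prompt-engineering", "TrustAI-laboratory", "Learn-Prompt-Hacking")
```

Calling `fetch_quality(...)` performs the same request as the curl command; unauthenticated use is rate-limited to 100 requests/day.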
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
utkusen/promptmap
a security scanner for custom LLM applications
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt