TrustAI-laboratory/Learn-Prompt-Hacking

The most comprehensive prompt hacking course available; the repository records our progress through a prompt engineering and prompt hacking course.

Quality score: 45 / 100 (Emerging)

Covers practical attack vectors including ChatGPT jailbreaks, GPT Assistant prompt leaks, and prompt injection techniques, alongside defensive security measures for LLM applications. The curriculum integrates adversarial machine learning concepts with real-world GenAI development patterns, supported by research papers and conference materials. Targets both offensive security research and defensive mitigation strategies for organizations deploying large language models.

271 stars. No commits in the last 6 months.

Flags: Stale (6 months), No Package, No Dependents

Maintenance: 2 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 17 / 25

Stars: 271
Forks: 34
Language: Jupyter Notebook
License: MIT
Last pushed: Apr 12, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/TrustAI-laboratory/Learn-Prompt-Hacking"

The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
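For programmatic use, the curl command above can be reproduced in Python. This is a minimal sketch: the URL structure is taken from the command shown, but the JSON field names in the response are not documented here, so inspect the live response before relying on any of them.

```python
# Sketch: fetching repository quality data from the public API.
# The endpoint path comes from the curl example above; response field
# names are NOT documented here and must be inspected before use.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering"


def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API URL for a GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality JSON for a repository (live request)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)


url = quality_url("TrustAI-laboratory", "Learn-Prompt-Hacking")
print(url)
```

Without an API key this stays within the 100 requests/day limit; if the service accepts a key for the higher tier, consult its documentation for how to pass it, as the header name is not stated here.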