Prompt Engineering Security Tools
There are 3 prompt engineering security tools tracked. The highest-rated is langgptai/LLM-Jailbreaks at 37/100 with 561 stars.
Get all 3 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=prompt-engineering-security&limit=20"
```
The API is open to everyone at 100 requests/day with no key; a free key raises the limit to 1,000 requests/day.
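As a minimal sketch, the same query can be assembled in Python before sending it with any HTTP client. The endpoint and parameters mirror the curl call above; the response schema is not documented here, so parsing is left out:

```python
from urllib.parse import urlencode

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def quality_url(domain: str, subcategory: str, limit: int = 20) -> str:
    """Build the dataset-quality query URL with properly encoded parameters."""
    params = {"domain": domain, "subcategory": subcategory, "limit": limit}
    return f"{BASE}?{urlencode(params)}"

url = quality_url("prompt-engineering", "prompt-engineering-security")
```

Using `urlencode` keeps the parameters safe if a domain or subcategory ever contains characters that need percent-encoding.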
| # | Tool | Score | Tier |
|---|---|---|---|
| 1 | langgptai/LLM-Jailbreaks — LLM Jailbreaks, ChatGPT, Claude, Llama, DAN Prompts, Prompt Leaking | 37 | Emerging |
| 2 | rpidanny/llm-prompt-templates — Empower your LLM to do more than you ever thought possible with these... | | Emerging |
| 3 | jalvarezz13/prompt.fail — prompt.fail explores prompt injection techniques in large language models... | | Experimental |