jrajath94/adversarial-prompt-suite
Systematic red-teaming framework for adversarial prompt evaluation — jailbreak detection, injection classification, attack surface coverage metrics
Stars: —
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/jrajath94/adversarial-prompt-suite"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
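
For scripted access, here is a minimal Python sketch of the same request. The URL is taken from the curl example above; the response schema is not documented on this page, so the snippet simply prints whatever JSON the endpoint returns.

import requests

# Same endpoint as the curl example; anonymous access is
# rate-limited to 100 requests/day (1,000/day with a free key).
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/jrajath94/adversarial-prompt-suite"
)

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or other HTTP errors
print(resp.json())       # schema undocumented here, so print the raw payload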
Higher-rated alternatives
dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. Easy-to-use Python package with...
anmolksachan/LLMInjector
Burp Suite Extension for LLM Prompt Injection Testing
rv427447/Cognitive-Hijacking-in-Long-Context-LLMs
🧠 Explore cognitive hijacking in long-context LLMs, revealing vulnerabilities in prompt...
moketchups/permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it...
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging...