Prompt Engineering Tools: Prompt Injection Security
Tools for detecting, testing, and defending against prompt injection attacks, jailbreaks, and adversarial prompts targeting LLMs. Does NOT include general LLM security, data poisoning defenses unrelated to prompts, or prompt engineering best practices.
We track 105 prompt injection security tools, of which one scores above 70 (the Verified tier). The highest-rated is protectai/llm-guard at 74/100, with 2,660 stars and 329,796 monthly downloads.
Get all 105 projects as JSON:

```shell
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=prompt-injection-security&limit=20"
```
The API is open to everyone at 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
| # | Tool | Description | Score | Tier |
|---|---|---|---|---|
| 1 | protectai/llm-guard | The Security Toolkit for LLM Interactions | 74 | Verified |
| 2 | MaxMLang/pytector | Easy to use LLM Prompt Injection Detection / Detector Python Package with... | | Established |
| 3 | agencyenterprise/PromptInject | PromptInject is a framework that assembles prompts in a modular fashion to... | | Emerging |
| 4 | Resk-Security/Resk-LLM | Resk is a robust Python library designed to enhance security and manage... | | Emerging |
| 5 | utkusen/promptmap | A security scanner for custom LLM applications | | Emerging |
| 6 | Dicklesworthstone/acip | The Advanced Cognitive Inoculation Prompt | | Emerging |
| 7 | TrustAI-laboratory/Learn-Prompt-Hacking | The most comprehensive prompt hacking course available, which record... | | Emerging |
| 8 | protectai/rebuff | LLM Prompt Injection Detector | | Emerging |
| 9 | jailbreakme-xyz/jailbreak | jailbreakme.xyz is an open-source decentralized app (dApp) where users are... | | Emerging |
| 10 | SemanticBrainCorp/SemanticShield | The Security Toolkit for managing Generative AI (especially LLMs) and... | | Emerging |
| 11 | Hellsender01/prompt-injection-taxonomy | A structured reference covering 253 prompt injection techniques across 17... | | Emerging |
| 12 | LostOxygen/llm-confidentiality | Whispers in the Machine: Confidentiality in Agentic Systems | | Emerging |
| 13 | Repello-AI/whistleblower | Whistleblower is an offensive security tool for testing against system prompt... | | Emerging |
| 14 | MindfulwareDev/PromptProof | Plug-and-play guardrail prompts for any LLM — injection defense,... | | Emerging |
| 15 | Code-and-Sorts/PromptDrifter | 🧭 PromptDrifter – one-command CI guardrail that catches prompt drift and... | | Emerging |
| 16 | alphasecio/prompt-guard | A web app for testing Prompt Guard, a classifier model by Meta for detecting... | | Emerging |
| 17 | yunwei37/prompt-hacker-collections | Prompt attack-defense, prompt injection, reverse engineering notes and... | | Emerging |
| 18 | Xayan/Rules.txt | A rationalist ruleset for "debugging" LLMs, auditing their internal... | | Emerging |
| 19 | cysecbench/dataset | Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking... | | Emerging |
| 20 | trinib/ZORG-Jailbreak-Prompt-Text | Bypass restricted and censored content on AI chat prompts 😈 | | Emerging |
| 21 | genia-dev/vibraniumdome | LLM Security Platform | | Emerging |
| 22 | CyberAlbSecOP/MINOTAUR_Impossible_GPT_Security_Challenge | MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge,... | | Emerging |
| 23 | takashiishida/cleanprompt | Anonymize sensitive information in text prompts before sending them to LLM... | | Emerging |
| 24 | Arash-Mansourpour/Breaking-LLaMA-Limitations-for-DAN | An educational and research-based exploration into breaking the limitations... | | Emerging |
| 25 | user1342/Folly | Open-source LLM Prompt-Injection and Jailbreaking Playground | | Emerging |
| 26 | akazah/prompt-anonymizer | Anonymize / mask personal information before sending prompts to chat AI... | | Experimental |
| 27 | M507/HackMeGPT | Vulnerable LLM Application | | Experimental |
| 28 | Addy-shetty/Pitt | PITT is an open-source, OWASP-aligned LLM security scanner that detects... | | Experimental |
| 29 | forcesunseen/llm-hackers-handbook | A guide to LLM hacking: fundamentals, prompt injection, offense, and defense | | Experimental |
| 30 | hugobatista/unicode-injection | Proof of concept demonstrating Unicode injection vulnerabilities using... | | Experimental |
| 31 | LLMPID/LLMPID-AS | LLM Prompt Injection Detection API Service PoC | | Experimental |
| 32 | HumanCompatibleAI/tensor-trust | A prompt injection game to collect data for robust ML research | | Experimental |
| 33 | 2alf/prmptinj | Curated + custom prompt injections | | Experimental |
| 34 | langguard/langguard-python | LangGuard Python Library | | Experimental |
| 35 | arekusandr/last_layer | Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️ | | Experimental |
| 36 | davidegat/happy-prompts | Utterly unelegant prompts for local LLMs, with scary results | | Experimental |
| 37 | kennethleungty/ARTKIT-Gandalf-Challenge | Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT | | Experimental |
| 38 | BlackTechX011/HacxGPT-Jailbreak-prompts | HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like... | | Experimental |
| 39 | crodjer/biip | Strip out PII before sending data | | Experimental |
| 40 | jagan-raj-r/appsec-prompt-cheatsheet | A curated collection of high-quality prompts to help AppSec engineers use... | | Experimental |
| 41 | LoonMORTI/promptshield | 🛡️ Protect LLM applications with PromptShields, a robust security framework... | | Experimental |
| 42 | promptshieldhq/promptshield-engine | Detection and anonymization microservice for the PromptShield stack | | Experimental |
| 43 | AmanPriyanshu/FRACTURED-SORRY-Bench-Automated-Multishot-Jailbreaking | FRACTURED-SORRY-Bench: This repository contains the code and data for the... | | Experimental |
| 44 | SurceBeats/GhostInk | Emoji steganography tool that hides secret text inside emojis using Unicode... | | Experimental |
| 45 | Sushegaad/Semantic-Privacy-Guard | Semantic Privacy Guard: A Java middleware that intercepts text, identifies... | | Experimental |
| 46 | TechJackSolutions/GAIO | Open-source guardrail standard for reducing AI fabrication and improving... | | Experimental |
| 47 | deepanshu-maliyan/guardrails-for-ai-coders | Security prompts and checklists for AI coding assistants. One command... | | Experimental |
| 48 | yangyihe0305-droid/llm-red-team-research | Systematic exploration of LLM alignment boundaries through logical stress testing | | Experimental |
| 49 | tamadip007/getSPNless | 🔍 Obtain Kerberos service tickets effortlessly using the SPN-less technique... | | Experimental |
| 50 | Georgeyoussef066/promptshield | 🛡️ Secure your LLM applications with PromptShields, a framework designed for... | | Experimental |
| 51 | ajaakevin/HACKME | Explore and analyze WhatsApp data using open-source OSINT tools designed for... | | Experimental |
| 52 | AraLeo5/Semantic-Privacy-Guard | Identify and protect personal data in text by intercepting and masking PII... | | Experimental |
| 53 | rb81/prompt-hacking-classifier | A flexible and portable solution that uses a single robust prompt and... | | Experimental |
| 54 | AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection | Step-by-step walkthrough of the Lakera Gandalf AI challenge, showcasing... | | Experimental |
| 55 | Unknown-2829/llm-prompt-engineering | A collection of prompt engineering and red-teaming experiments with large... | | Experimental |
| 56 | promptinjection/promptinjection.github.io | Contributed by Community | | Experimental |
| 57 | Eulex0x/cleanmyprompt | A transparent, local-only tool to sanitize sensitive info for AI | | Experimental |
| 58 | amk9978/Guardian | The LLM guardian kernel | | Experimental |
| 59 | yksanjo/promptshield | 🛡️ AI prompt security and validation tool to protect against prompt injection attacks | | Experimental |
| 60 | tuxsharxsec/Jailbreaks | A repo for all the jailbreaks | | Experimental |
| 61 | Ethan-YS/PromptGuard-for-Agents | 🛡️ Universal AI defense framework protecting agents from prompt injection... | | Experimental |
| 62 | grasses/PoisonPrompt | Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language... | | Experimental |
| 63 | KazKozDev/system-prompt-benchmark | Test your LLM system prompts against 287 real-world attack vectors including... | | Experimental |
| 64 | AiShieldsOrg/AiShieldsWeb | AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer | | Experimental |
| 65 | sruzima/safe-gamer-helper-chatbot | System prompt for SafeGamer Helper, an AI chatbot that teaches kids online... | | Experimental |
| 66 | successfulstudy/jailbreakprompt | A compiled list of AI jailbreak scenarios for enthusiasts to explore and test | | Experimental |
| 67 | promptslab/LLM-Prompt-Vulnerabilities | Prompt methods to find the vulnerabilities in generative models | | Experimental |
| 68 | anuraag-khare/prompt-fence | A Python SDK (backed by Rust) for establishing cryptographic security... | | Experimental |
| 69 | ianreboot/safeprompt | Protect AI automations from prompt injection attacks. One API call stops... | | Experimental |
| 70 | apologetik/CyberPrompts | A collection of Large Language Model (LLM) prompts helpful for various... | | Experimental |
| 71 | anishrajpandey/Prompt_Injection_Detector | A lightweight web tool to detect prompt injection in AI inputs. Helps... | | Experimental |
| 72 | asif-hanif/baple | [MICCAI 2024] Official code repository of the paper "BAPLe: Backdoor... | | Experimental |
| 73 | liangzid/PromptExtractionEval | Source code of the paper "Why Are My Prompts Leaked? Unraveling Prompt... | | Experimental |
| 74 | 5ynthaire/5YN-LiveWebpageScanPrecision-Prompt | Prompt forces direct, real-time retrieval of unaltered text from URLs with... | | Experimental |
| 75 | IAHASH/iahash | IA-HASH: A simple, universal way to verify that an AI truly generated a... | | Experimental |
| 76 | astecka-m/AgentGuard | Protect AI agents by detecting and blocking prompt, command injection,... | | Experimental |
| 77 | SafellmHub/hguard-go | Guardrails for LLMs: detect and block hallucinated tool calls to improve... | | Experimental |
| 78 | obscuralabs-AI/Symbolic-Prompt-PenTest | Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs | | Experimental |
| 79 | pastsafe-ext/pastesafe | Chrome extension that prevents leaking API keys and sensitive data into AI chats | | Experimental |
| 80 | alexandrughinea/prompt-chainmail-ts | Security middleware that shields AI applications from prompt injection,... | | Experimental |
| 81 | Kimosabey/sentinel-layer | AI Safety, Governance, and Security Layer featuring advanced Prompt... | | Experimental |
| 82 | yeraydoblasbueno/llm-security-framework | Testing LLM vulnerabilities (Jailbreaks, Prompt Injections) locally using... | | Experimental |
| 83 | khaal10460/sentinel-trace | Full-stack AI data ingestion pipeline with real-time adversarial filtering,... | | Experimental |
| 84 | ndpvt-web/aristotelian-compliance-test | When Aristotle gets a LinkedIn account and starts red-teaming LLMs.... | | Experimental |
| 85 | jyotisin/secure-llm-gateway | Secure large language model access by enforcing role-based controls,... | | Experimental |
| 86 | bhargavi852004/Safe-Scope | Safe Scope is a real-time, explainable AI platform that monitors online... | | Experimental |
| 87 | sachnaror/prompt-guardrails-engine | Production-grade FastAPI microservice that forces LLMs to behave.... | | Experimental |
| 88 | bcdannyboy/PromptMatryoshka | Multi-Provider LLM Jailbreak Research Framework | | Experimental |
| 89 | Pro-GenAI/Smart-Prompt-Eval | Evaluating LLM Robustness with Manipulated Prompts | | Experimental |
| 90 | valentinaschiavon99/promptguard | PromptGuard · LLM Prompt Risk Analyzer · Project for "Neuere Methoden in der... | | Experimental |
| 91 | thatgeeman/prompt-injection-cv | PoC for prompt injection attacks on LLMs in recruitment. Tests Gemini's... | | Experimental |
| 92 | thepratikguptaa/prompt-injection | This repository serves as a comprehensive resource for understanding and... | | Experimental |
| 93 | Tarunjit45/PromptGuard | PromptGuard is a pragmatic, opinionated framework for establishing... | | Experimental |
| 94 | Mousewarriors/Cybersecurity-Portfolio | Hands-on cybersecurity projects, built and documented, focused on SOC... | | Experimental |
| 95 | coollane925/AI-FUNDAMENTALS-AND-PROBING | A beginner-to-intermediate level report for people who are interested... | | Experimental |
| 96 | SolsticeMoon/Spectre_Steganography_System | An experiment in LLM-assisted steganography using zero-width text | | Experimental |
| 97 | best247team1-cloud/Ai-shield-pro | AI Shield Pro: A secure privacy tool to redact sensitive data and engineer... | | Experimental |
| 98 | PMQ9/Ordo-Maledictum-Promptorum | Researching a system for preventing prompt injection by separating user... | | Experimental |
| 99 | wmjg-alt/ai_security_ | Demo of an AI security failure: prompt injection | | Experimental |
| 100 | yogeshwankhede007/WebSec-AI | WebSec-AI: A toolkit that combines AI and cybersecurity techniques to detect... | | Experimental |
| 101 | seamus-brady/promptbouncer | A prototype defense against prompt-based attacks with real-time threat assessment | | Experimental |
| 102 | PrithikaGopinath/DataGuardian-AI-Privacy-Coach | AI-powered privacy coach with risk detection, scenario analysis, and... | | Experimental |
| 103 | gkanellopoulos/prompthorizon | Python library that enables developers to anonymize JSON objects by creating... | | Experimental |
| 104 | vladutdinu/prompty-api | PromptyAPI, a security layer for LLM-based applications | | Experimental |
| 105 | nodite/llm-guard-ts | The Security Toolkit for LLM Interactions (TS version) | | Experimental |