Prompt Engineering Tools: Prompt Injection Security

Tools for detecting, testing, and defending against prompt injection attacks, jailbreaks, and adversarial prompts targeting LLMs. Excluded: general LLM security, data-poisoning defenses unrelated to prompts, and prompt engineering best practices.

105 prompt injection security tools are tracked; one scores above 70 (the Verified tier). The highest-rated is protectai/llm-guard at 74/100, with 2,660 stars and 329,796 monthly downloads.

Get all 105 projects as JSON (the example below returns the first 20; raise `limit` to fetch the full list):

curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=prompt-engineering&subcategory=prompt-injection-security&limit=20"

Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
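
For scripted access, here is a minimal Python sketch of the same request. It assumes only what the curl example shows; the `projects` key used to read the response is a guess at the payload shape, not a documented schema:

```python
import requests

BASE_URL = "https://pt-edge.onrender.com/api/v1/datasets/quality"

# Query parameters taken from the curl example above; limit raised to 105
# on the assumption that the endpoint accepts larger page sizes.
params = {
    "domain": "prompt-engineering",
    "subcategory": "prompt-injection-security",
    "limit": 105,
}

response = requests.get(BASE_URL, params=params, timeout=30)
response.raise_for_status()
payload = response.json()

# "projects" is a hypothetical response key; adjust to the actual schema.
for project in payload.get("projects", []):
    print(project)
```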

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | protectai/llm-guard | The Security Toolkit for LLM Interactions | 74 | Verified |
| 2 | MaxMLang/pytector | Easy to use LLM Prompt Injection Detection / Detector Python Package with... | 63 | Established |
| 3 | agencyenterprise/PromptInject | PromptInject is a framework that assembles prompts in a modular fashion to... | 48 | Emerging |
| 4 | Resk-Security/Resk-LLM | Resk is a robust Python library designed to enhance security and manage... | 47 | Emerging |
| 5 | utkusen/promptmap | a security scanner for custom LLM applications | 44 | Emerging |
| 6 | Dicklesworthstone/acip | The Advanced Cognitive Inoculation Prompt | 43 | Emerging |
| 7 | TrustAI-laboratory/Learn-Prompt-Hacking | This is the most comprehensive prompt hacking course available, which record... | 38 | Emerging |
| 8 | protectai/rebuff | LLM Prompt Injection Detector | 38 | Emerging |
| 9 | jailbreakme-xyz/jailbreak | jailbreakme.xyz is an open-source decentralized app (dApp) where users are... | 38 | Emerging |
| 10 | SemanticBrainCorp/SemanticShield | The Security Toolkit for managing Generative AI (especially LLMs) and... | 37 | Emerging |
| 11 | Hellsender01/prompt-injection-taxonomy | A structured reference covering 253 prompt injection techniques across 17... | 37 | Emerging |
| 12 | LostOxygen/llm-confidentiality | Whispers in the Machine: Confidentiality in Agentic Systems | 36 | Emerging |
| 13 | Repello-AI/whistleblower | Whistleblower is an offensive security tool for testing against system prompt... | 36 | Emerging |
| 14 | MindfulwareDev/PromptProof | Plug-and-play guardrail prompts for any LLM — injection defense,... | 35 | Emerging |
| 15 | Code-and-Sorts/PromptDrifter | 🧭 PromptDrifter – one‑command CI guardrail that catches prompt drift and... | 34 | Emerging |
| 16 | alphasecio/prompt-guard | A web app for testing Prompt Guard, a classifier model by Meta for detecting... | 34 | Emerging |
| 17 | yunwei37/prompt-hacker-collections | prompt attack-defense, prompt Injection, reverse engineering notes and... | 34 | Emerging |
| 18 | Xayan/Rules.txt | A rationalist ruleset for "debugging" LLMs, auditing their internal... | 34 | Emerging |
| 19 | cysecbench/dataset | Generative AI-based CyberSecurity-focused Prompt Dataset for Benchmarking... | 33 | Emerging |
| 20 | trinib/ZORG-Jailbreak-Prompt-Text | Bypass restricted and censored content on AI chat prompts 😈 | 32 | Emerging |
| 21 | genia-dev/vibraniumdome | LLM Security Platform. | 31 | Emerging |
| 22 | CyberAlbSecOP/MINOTAUR_Impossible_GPT_Security_Challenge | MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge,... | 31 | Emerging |
| 23 | takashiishida/cleanprompt | Anonymize sensitive information in text prompts before sending them to LLM... | 31 | Emerging |
| 24 | Arash-Mansourpour/Breaking-LLaMA-Limitations-for-DAN | An educational and research-based exploration into breaking the limitations... | 31 | Emerging |
| 25 | user1342/Folly | Open-source LLM Prompt-Injection and Jailbreaking Playground | 31 | Emerging |
| 26 | akazah/prompt-anonymizer | Anonymize / mask personal information before sending prompts to chat AI... | 29 | Experimental |
| 27 | M507/HackMeGPT | Vulnerable LLM Application | 29 | Experimental |
| 28 | Addy-shetty/Pitt | PITT is an open‑source, OWASP‑aligned LLM security scanner that detects... | 28 | Experimental |
| 29 | forcesunseen/llm-hackers-handbook | A guide to LLM hacking: fundamentals, prompt injection, offense, and defense | 27 | Experimental |
| 30 | hugobatista/unicode-injection | Proof of concept demonstrating Unicode injection vulnerabilities using... | 27 | Experimental |
| 31 | LLMPID/LLMPID-AS | LLM Prompt Injection Detection API Service PoC. | 27 | Experimental |
| 32 | HumanCompatibleAI/tensor-trust | A prompt injection game to collect data for robust ML research | 27 | Experimental |
| 33 | 2alf/prmptinj | Curated + custom prompt injections. | 26 | Experimental |
| 34 | langguard/langguard-python | LangGuard Python Library | 26 | Experimental |
| 35 | arekusandr/last_layer | Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️ | 26 | Experimental |
| 36 | davidegat/happy-prompts | Utterly unelegant prompts for local LLMs, with scary results. | 25 | Experimental |
| 37 | kennethleungty/ARTKIT-Gandalf-Challenge | Exposing Jailbreak Vulnerabilities in LLM Applications with ARTKIT | 25 | Experimental |
| 38 | BlackTechX011/HacxGPT-Jailbreak-prompts | HacxGPT Jailbreak 🚀: Unlock the full potential of top AI models like... | 25 | Experimental |
| 39 | crodjer/biip | Strip out PII before Sending Data | 25 | Experimental |
| 40 | jagan-raj-r/appsec-prompt-cheatsheet | A curated collection of high-quality prompts to help AppSec engineers use... | 24 | Experimental |
| 41 | LoonMORTI/promptshield | 🛡️ Protect LLM applications with PromptShields, a robust security framework... | 24 | Experimental |
| 42 | promptshieldhq/promptshield-engine | Detection and anonymization microservice for the PromptShield stack. | 24 | Experimental |
| 43 | AmanPriyanshu/FRACTURED-SORRY-Bench-Automated-Multishot-Jailbreaking | FRACTURED-SORRY-Bench: This repository contains the code and data for the... | 24 | Experimental |
| 44 | SurceBeats/GhostInk | Emoji steganography tool that hides secret text inside emojis using Unicode... | 24 | Experimental |
| 45 | Sushegaad/Semantic-Privacy-Guard | Semantic Privacy Guard: A Java middleware that intercepts text, identifies... | 23 | Experimental |
| 46 | TechJackSolutions/GAIO | Open-source guardrail standard for reducing AI fabrication and improving... | 23 | Experimental |
| 47 | deepanshu-maliyan/guardrails-for-ai-coders | Security prompts and checklists for AI coding assistants. One command... | 23 | Experimental |
| 48 | yangyihe0305-droid/llm-red-team-research | Systematic exploration of LLM alignment boundaries through logical stress testing | 23 | Experimental |
| 49 | tamadip007/getSPNless | 🔍 Obtain Kerberos service tickets effortlessly using the SPN-less technique... | 22 | Experimental |
| 50 | Georgeyoussef066/promptshield | 🛡️ Secure your LLM applications with PromptShields, a framework designed for... | 22 | Experimental |
| 51 | ajaakevin/HACKME | Explore and analyze WhatsApp data using open-source OSINT tools designed for... | 22 | Experimental |
| 52 | AraLeo5/Semantic-Privacy-Guard | Identify and protect personal data in text by intercepting and masking PII... | 22 | Experimental |
| 53 | rb81/prompt-hacking-classifier | A flexible and portable solution that uses a single robust prompt and... | 22 | Experimental |
| 54 | AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection | Lakera Gandalf AI challenge's step-by-step walkthrough, showcasing... | 22 | Experimental |
| 55 | Unknown-2829/llm-prompt-engineering | A collection of prompt engineering and red-teaming experiments with large... | 21 | Experimental |
| 56 | promptinjection/promptinjection.github.io | Contributed by Community | 21 | Experimental |
| 57 | Eulex0x/cleanmyprompt | A transparent, local-only tool to sanitize sensitive info for AI. | 21 | Experimental |
| 58 | amk9978/Guardian | The LLM guardian kernel | 20 | Experimental |
| 59 | yksanjo/promptshield | 🛡️ AI prompt security and validation tool to protect against prompt injection attacks | 20 | Experimental |
| 60 | tuxsharxsec/Jailbreaks | A repo for all the jailbreaks | 20 | Experimental |
| 61 | Ethan-YS/PromptGuard-for-Agents | 🛡️ Universal AI defense framework protecting agents from prompt injection... | 20 | Experimental |
| 62 | grasses/PoisonPrompt | Code for paper: PoisonPrompt: Backdoor Attack on Prompt-based Large Language... | 20 | Experimental |
| 63 | KazKozDev/system-prompt-benchmark | Test your LLM system prompts against 287 real-world attack vectors including... | 20 | Experimental |
| 64 | AiShieldsOrg/AiShieldsWeb | AiShields is an open-source Artificial Intelligence Data Input and Output Sanitizer | 19 | Experimental |
| 65 | sruzima/safe-gamer-helper-chatbot | System prompt for SafeGamer Helper, an AI chatbot that teaches kids online... | 19 | Experimental |
| 66 | successfulstudy/jailbreakprompt | Compile a list of AI jailbreak scenarios for enthusiasts to explore and test. | 19 | Experimental |
| 67 | promptslab/LLM-Prompt-Vulnerabilities | Prompt methods to find the vulnerabilities in Generative Models | 19 | Experimental |
| 68 | anuraag-khare/prompt-fence | A Python SDK (backed by Rust) for establishing cryptographic security... | 19 | Experimental |
| 69 | ianreboot/safeprompt | Protect AI automations from prompt injection attacks. One API call stops... | 19 | Experimental |
| 70 | apologetik/CyberPrompts | A collection of Large Language Model (LLM) prompts helpful for various... | 19 | Experimental |
| 71 | anishrajpandey/Prompt_Injection_Detector | A lightweight web tool to detect prompt injection in AI inputs. Helps... | 17 | Experimental |
| 72 | asif-hanif/baple | [MICCAI 2024] Official code repository of paper titled "BAPLe: Backdoor... | 17 | Experimental |
| 73 | liangzid/PromptExtractionEval | Source code of the paper "Why Are My Prompts Leaked? Unraveling Prompt... | 16 | Experimental |
| 74 | 5ynthaire/5YN-LiveWebpageScanPrecision-Prompt | Prompt forces direct, real-time retrieval of unaltered text from URLs with... | 16 | Experimental |
| 75 | IAHASH/iahash | IA-HASH: A simple, universal way to verify that an AI truly generated a... | 16 | Experimental |
| 76 | astecka-m/AgentGuard | Protect AI agents by detecting and blocking prompt, command injection,... | 15 | Experimental |
| 77 | SafellmHub/hguard-go | Guardrails for LLMs: detect and block hallucinated tool calls to improve... | 15 | Experimental |
| 78 | obscuralabs-AI/Symbolic-Prompt-PenTest | Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs. | 15 | Experimental |
| 79 | pastsafe-ext/pastesafe | Chrome extension that prevents leaking API keys and sensitive data into AI chats | 14 | Experimental |
| 80 | alexandrughinea/prompt-chainmail-ts | Security middleware that shields AI applications from prompt injection,... | 14 | Experimental |
| 81 | Kimosabey/sentinel-layer | AI Safety, Governance, and Security Layer featuring advanced Prompt... | 14 | Experimental |
| 82 | yeraydoblasbueno/llm-security-framework | Testing LLM vulnerabilities (Jailbreaks, Prompt Injections) locally using... | 14 | Experimental |
| 83 | khaal10460/sentinel-trace | Full-stack AI data ingestion pipeline with real-time adversarial filtering,... | 14 | Experimental |
| 84 | ndpvt-web/aristotelian-compliance-test | When Aristotle gets a LinkedIn account and starts red-teaming LLMs.... | 14 | Experimental |
| 85 | jyotisin/secure-llm-gateway | Secure large language model access by enforcing role-based controls,... | 14 | Experimental |
| 86 | bhargavi852004/Safe-Scope | Safe Scope is a real-time, explainable AI platform that monitors online... | 14 | Experimental |
| 87 | sachnaror/prompt-guardrails-engine | Production-grade FastAPI microservice that forces LLMs to behave.... | 14 | Experimental |
| 88 | bcdannyboy/PromptMatryoshka | Multi-Provider LLM Jailbreak Research Framework | 14 | Experimental |
| 89 | Pro-GenAI/Smart-Prompt-Eval | Evaluating LLM Robustness with Manipulated Prompts | 14 | Experimental |
| 90 | valentinaschiavon99/promptguard | PromptGuard · LLM Prompt Risk Analyzer · Project for "Neuere Methoden in der... | 13 | Experimental |
| 91 | thatgeeman/prompt-injection-cv | PoC for prompt injection attacks on LLMs in recruitment. Tests Gemini's... | 13 | Experimental |
| 92 | thepratikguptaa/prompt-injection | This repository serves as a comprehensive resource for understanding and... | 12 | Experimental |
| 93 | Tarunjit45/PromptGuard | PromptGuard is a pragmatic, opinionated framework for establishing... | 12 | Experimental |
| 94 | Mousewarriors/Cybersecurity-Portfolio | I built and documented hands-on cybersecurity projects focused on SOC... | 12 | Experimental |
| 95 | coollane925/AI-FUNDAMENTALS-AND-PROBING | This is a beginner-intermediate level report for people who are interested... | 11 | Experimental |
| 96 | SolsticeMoon/Spectre_Steganography_System | An experiment in LLM-assisted steganography using zero-width text. | 11 | Experimental |
| 97 | best247team1-cloud/Ai-shield-pro | AI Shield Pro: A secure privacy tool to redact sensitive data and engineer... | 11 | Experimental |
| 98 | PMQ9/Ordo-Maledictum-Promptorum | Researching a system for preventing prompt injection by separating user... | 11 | Experimental |
| 99 | wmjg-alt/ai_security_ | demo of an AI security failure, prompt injection | 11 | Experimental |
| 100 | yogeshwankhede007/WebSec-AI | WebSec-AI: A toolkit that combines AI and cybersecurity techniques to detect... | 11 | Experimental |
| 101 | seamus-brady/promptbouncer | A prototype defense against prompt-based attacks with real-time threat assessment. | 11 | Experimental |
| 102 | PrithikaGopinath/DataGuardian-AI-Privacy-Coach | AI-powered privacy coach with risk detection, scenario analysis, and... | 11 | Experimental |
| 103 | gkanellopoulos/prompthorizon | Python library that enables developers to anonymize JSON objects by creating... | 10 | Experimental |
| 104 | vladutdinu/prompty-api | PromptyAPI, people's LLM-based applications security layer | 10 | Experimental |
| 105 | nodite/llm-guard-ts | The Security Toolkit for LLM Interactions (TS version) | 10 | Experimental |
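
To slice the fetched data programmatically, here is a short sketch that groups records by tier and keeps the highest-scoring entry in each. The field names (`score`, `tier`) are assumptions inferred from the table columns above, not a documented schema:

```python
def top_by_tier(projects: list[dict]) -> dict[str, dict]:
    """Return the highest-scoring project per tier.

    Assumes each record carries "score" and "tier" fields, mirroring the
    Score and Tier columns in the table above; the real API payload may
    name them differently.
    """
    best: dict[str, dict] = {}
    for project in projects:
        tier = project.get("tier", "Unknown")
        # Keep whichever record in this tier has the highest score so far.
        if tier not in best or project.get("score", 0) > best[tier].get("score", 0):
            best[tier] = project
    return best
```

Run against the payload from the fetch sketch above, this should surface protectai/llm-guard for Verified and MaxMLang/pytector for Established, matching the top of the table.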