dronefreak/PromptScreen
Protect your LLMs from prompt injection and jailbreak attacks. An easy-to-use Python package with multiple detection methods, a CLI tool, and FastAPI integration.
Available on PyPI.
Stars: 9
Forks: 4
Language: Python
License: Apache-2.0
Category: prompt-engineering
Last pushed: Jan 04, 2026
Commits (30d): 0
Dependencies: 8
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/dronefreak/PromptScreen"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
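For scripted access, the same endpoint can be called from Python. A minimal sketch using the requests library, assuming the API returns JSON; the response fields are not documented on this page, so they are printed generically:

import requests

# Endpoint copied from the curl example above.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/dronefreak/PromptScreen"
)

resp = requests.get(URL, timeout=10)
resp.raise_for_status()  # fail loudly on rate limiting or server errors

# The response schema is an assumption; print whatever keys come back.
for key, value in resp.json().items():
    print(f"{key}: {value}")

If you use a free API key, how it is passed (header or query parameter) is not stated on this page; check the service's documentation for the exact name.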
Related tools
anmolksachan/LLMInjector
Burp Suite Extension for LLM Prompt Injection Testing
moketchups/permanently-jailbroken
We asked 6 AIs about their own programming. All 6 said jailbreaking will never be fixed. Run it...
AhsanAyub/malicious-prompt-detection
Detection of malicious prompts used to exploit large language models (LLMs) by leveraging...
AdityaBhatt3010/When-LinkedIn-Gmail-Obey-Hidden-AI-Prompts-Lessons-in-Indirect-Prompt-Injection
A real-world look at how hidden instructions in profiles and emails trick AI into unexpected...