prompt-guard and prompt-shield
About prompt-guard
seojoonkim/prompt-guard
Advanced prompt injection defense system for AI agents. Multi-language detection, severity scoring, and security auditing.
This project helps protect your AI agents and large language model (LLM) applications from being manipulated or leaking sensitive information. It takes user input or AI-generated responses and identifies attempts to bypass safety rules or extract confidential data like API keys. Security engineers, AI product managers, or anyone deploying an AI assistant would use this to ensure their AI behaves as intended and doesn't reveal secrets.
About prompt-shield
LuciferForge/prompt-shield
Lightweight prompt injection detector. 22 attack patterns. Blocks jailbreaks before they reach your model.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work