llm-guard and promptmap
One is a runtime security toolkit that actively defends against prompt injection, while the other is a penetration testing tool for identifying prompt injection vulnerabilities in LLM applications.
About llm-guard
protectai/llm-guard
The Security Toolkit for LLM Interactions
Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
About promptmap
utkusen/promptmap
a security scanner for custom LLM applications
Employs a dual-LLM architecture where a controller model evaluates whether attack payloads successfully compromise the target application, enabling both white-box testing (direct model access with system prompts) and black-box testing (HTTP endpoints). Includes 50+ pre-built YAML-configurable rules across prompt stealing, jailbreaking, and bias categories, with support for OpenAI, Anthropic, Google, XAI, and local Ollama models.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work