llm-guard and promptmap

One is a runtime security toolkit that actively defends against prompt injection, while the other is a penetration testing tool for identifying prompt injection vulnerabilities in LLM applications.

llm-guard
74
Verified
promptmap
51
Established
Maintenance 6/25
Adoption 21/25
Maturity 25/25
Community 22/25
Maintenance 6/25
Adoption 10/25
Maturity 16/25
Community 19/25
Stars: 2,660
Forks: 353
Downloads: 329,796
Commits (30d): 0
Language: Python
License: MIT
Stars: 1,146
Forks: 120
Downloads:
Commits (30d): 0
Language: Python
License: GPL-3.0
No risk flags
No Package No Dependents

About llm-guard

protectai/llm-guard

The Security Toolkit for LLM Interactions

Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.

About promptmap

utkusen/promptmap

a security scanner for custom LLM applications

Employs a dual-LLM architecture where a controller model evaluates whether attack payloads successfully compromise the target application, enabling both white-box testing (direct model access with system prompts) and black-box testing (HTTP endpoints). Includes 50+ pre-built YAML-configurable rules across prompt stealing, jailbreaking, and bias categories, with support for OpenAI, Anthropic, Google, XAI, and local Ollama models.

Scores updated daily from GitHub, PyPI, and npm data. How scores work