protectai/llm-guard
The Security Toolkit for LLM Interactions
Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
2,660 stars and 329,796 monthly downloads. Used by 1 other package. Available on PyPI.
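A typical pipeline composes a list of input scanners and a list of output scanners and passes each prompt and model response through them. The sketch below follows that pattern using scanner names and the scan_prompt / scan_output helpers shown in the library's documented examples; exact class names, module paths, and return values should be confirmed against the current release.

from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import PromptInjection, Secrets, Toxicity
from llm_guard.output_scanners import FactualConsistency

# Compose only the checks this pipeline needs: injection detection,
# secret redaction, and toxicity on the way in; factual consistency
# on the way out. (Scanner selection here is illustrative.)
input_scanners = [PromptInjection(), Secrets(), Toxicity()]
output_scanners = [FactualConsistency()]

prompt = "Summarize our deployment runbook."
sanitized_prompt, input_valid, input_scores = scan_prompt(input_scanners, prompt)

model_output = "..."  # response text from OpenAI or another provider
sanitized_output, output_valid, output_scores = scan_output(
    output_scanners, sanitized_prompt, model_output
)

Each scan call returns the (possibly sanitized) text along with per-scanner validity flags and risk scores, which the caller can use to block, log, or rewrite the interaction.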
Stars: 2,660
Forks: 353
Language: Python
License: MIT
Category: (none listed)
Last pushed: Dec 15, 2025
Monthly downloads: 329,796
Commits (last 30 days): 0
Dependencies: 12
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/llm-guard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
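For programmatic use, the same endpoint can be called from Python. This is a minimal sketch against the unauthenticated tier (100 requests/day), assuming the endpoint returns a JSON body; no API key header is shown because its name is not documented here.

import requests

# Fetch the quality data for protectai/llm-guard from the public endpoint.
url = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/llm-guard"
response = requests.get(url, timeout=10)
response.raise_for_status()  # surface rate-limit or server errors
data = response.json()
print(data)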
Related tools
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...
utkusen/promptmap
a security scanner for custom LLM applications
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt