protectai/llm-guard

The Security Toolkit for LLM Interactions

Quality score: 74/100 (Verified)
Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
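A minimal sketch of the composable scanner pipeline described above. The scanner classes and the scan_prompt/scan_output helpers follow llm-guard's documented API, but exact names and signatures should be verified against the installed version:

```python
# Sketch of llm-guard's composable scanner pipeline. Scanner names
# (PromptInjection, Secrets, Toxicity, FactualConsistency) follow the
# project's documented API; verify against your installed version.
from llm_guard import scan_output, scan_prompt
from llm_guard.input_scanners import PromptInjection, Secrets, Toxicity
from llm_guard.output_scanners import FactualConsistency

input_scanners = [PromptInjection(), Secrets(), Toxicity()]
output_scanners = [FactualConsistency()]

prompt = "Summarize our Q3 results."

# Each input scanner runs in order; results_valid maps scanner name to
# pass/fail, results_score maps scanner name to a risk score.
sanitized_prompt, results_valid, results_score = scan_prompt(input_scanners, prompt)
if not all(results_valid.values()):
    raise ValueError(f"Prompt failed security checks: {results_score}")

# model_output would come from your LLM provider (e.g., OpenAI's API).
model_output = "Q3 revenue grew 12% quarter over quarter."
sanitized_output, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, model_output
)
```

Swapping scanners in or out of the two lists is the fine-grained control mentioned above: each check is an independent object, so pipelines can be tuned per deployment.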

2,660 stars and 329,796 monthly downloads. Used by 1 other package. Available on PyPI.

Maintenance: 6/25
Adoption: 21/25
Maturity: 25/25
Community: 22/25


Stars: 2,660
Forks: 353
Language: Python
License: MIT
Last pushed: Dec 15, 2025
Monthly downloads: 329,796
Commits (30d): 0
Dependencies: 12
Reverse dependents: 1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/llm-guard"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
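The same data can be fetched from Python; a minimal sketch using requests, assuming the endpoint returns JSON as the curl example suggests:

```python
# Minimal Python equivalent of the curl call above. Assumes the endpoint
# returns JSON; the response's field names are not specified here.
import requests

url = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "prompt-engineering/protectai/llm-guard"
)
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())
```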