llm-guard and pytector

The two tools overlap in purpose: LLM-Guard offers a comprehensive security toolkit for LLM interactions, while Pytector focuses specifically on prompt injection detection, positioning it as a lighter-weight alternative for users who prioritize local model support.

                 llm-guard        pytector
Score            74 (Verified)    70 (Verified)
Maintenance      6/25             10/25
Adoption         21/25            16/25
Maturity         25/25            25/25
Community        22/25            19/25
Stars            2,660            38
Forks            353              22
Downloads        329,796          3,076
Commits (30d)    0                0
Language         Python           Python
License          MIT              Apache-2.0
Risk flags       None             None

About llm-guard

protectai/llm-guard

The Security Toolkit for LLM Interactions

Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
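The composable scanner architecture described above can be sketched in plain Python. This is an illustrative pattern only, assuming nothing beyond the description; the names (ScanResult, scan_prompt, token_limit, secret_redactor) are hypothetical and do not reflect llm-guard's actual API:

```python
import re
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class ScanResult:
    sanitized: str   # possibly rewritten text
    valid: bool      # False means the input should be rejected
    score: float     # 0.0 = clean, 1.0 = maximally risky

# A scanner is any callable from text to a ScanResult.
Scanner = Callable[[str], ScanResult]

def token_limit(max_tokens: int) -> Scanner:
    """Truncate overly long prompts (rough whitespace tokenization)."""
    def scan(text: str) -> ScanResult:
        tokens = text.split()
        if len(tokens) <= max_tokens:
            return ScanResult(text, True, 0.0)
        return ScanResult(" ".join(tokens[:max_tokens]), True, 0.5)
    return scan

def secret_redactor() -> Scanner:
    """Redact strings that look like API keys (hypothetical pattern)."""
    pattern = re.compile(r"sk-[A-Za-z0-9]{10,}")
    def scan(text: str) -> ScanResult:
        redacted, hits = pattern.subn("[REDACTED]", text)
        return ScanResult(redacted, True, 1.0 if hits else 0.0)
    return scan

def scan_prompt(scanners: List[Scanner], prompt: str) -> Tuple[str, bool, Dict[int, float]]:
    """Run scanners in order, feeding each one's sanitized output forward."""
    valid, scores = True, {}
    for i, scanner in enumerate(scanners):
        result = scanner(prompt)
        prompt = result.sanitized
        valid = valid and result.valid
        scores[i] = result.score
    return prompt, valid, scores

sanitized, ok, scores = scan_prompt(
    [secret_redactor(), token_limit(50)],
    "My key is sk-abcdef1234567890, please summarize this.",
)
```

The fine-grained control mentioned above comes from the caller choosing which scanners appear in the list and in what order; the same pattern applies symmetrically to model outputs.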

About pytector

MaxMLang/pytector

Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.

Detects prompt injections via transformer-based models (DeBERTa, DistilBERT, ONNX) with local inference, or integrates with Groq's hosted safeguard models for API-based detection. Provides LangChain LCEL-compatible guardrail runnables that block unsafe inputs before LLM execution, plus customizable keyword-based filtering for input/output layers. Designed as a rapid security supplementation layer for development, self-hosted deployments, and foundation model enhancement rather than standalone production protection.
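The keyword-based guardrail idea described above, blocking unsafe inputs before they ever reach the model, can be sketched as a decorator around an LLM call. This is a minimal sketch under stated assumptions; the names (BLOCKLIST, guarded, PromptBlockedError) are hypothetical and do not reflect pytector's actual API:

```python
# Naive phrase blocklist; a real deployment would combine this with a
# transformer-based detector as the package description suggests.
BLOCKLIST = (
    "ignore previous instructions",
    "disregard all rules",
    "reveal the system prompt",
)

class PromptBlockedError(Exception):
    """Raised when a prompt trips the input-layer filter."""

def guarded(llm_call):
    """Wrap an LLM call so flagged inputs never reach the model."""
    def wrapper(prompt: str) -> str:
        lowered = prompt.lower()
        for phrase in BLOCKLIST:
            if phrase in lowered:
                raise PromptBlockedError(f"blocked phrase: {phrase!r}")
        return llm_call(prompt)
    return wrapper

@guarded
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"echo: {prompt}"
```

This matches the "supplementation layer" framing above: the filter is a cheap first line of defense in front of the model, not a standalone production safeguard.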

Scores updated daily from GitHub, PyPI, and npm data.