MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.
Detects prompt injections via transformer-based models (DeBERTa, DistilBERT, ONNX) with local inference, or integrates with Groq's hosted safeguard models for API-based detection. Provides LangChain LCEL-compatible guardrail runnables that block unsafe inputs before LLM execution, plus customizable keyword-based filtering for input/output layers. Designed as a rapid security supplementation layer for development, self-hosted deployments, and foundation model enhancement rather than standalone production protection.
38 stars and 3,076 monthly downloads. Used by 1 other package. Available on PyPI.
Stars
38
Forks
22
Language
Python
License
Apache-2.0
Category
Last pushed
Feb 14, 2026
Monthly downloads
3,076
Commits (30d)
0
Dependencies
3
Reverse dependents
1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MaxMLang/pytector"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related tools
protectai/llm-guard
The Security Toolkit for LLM Interactions
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
utkusen/promptmap
a security scanner for custom LLM applications
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt
Resk-Security/Resk-LLM
Resk is a robust Python library designed to enhance security and manage context when...