MaxMLang/pytector

Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.

70
/ 100
Verified

Detects prompt injections via transformer-based models (DeBERTa, DistilBERT, ONNX) with local inference, or integrates with Groq's hosted safeguard models for API-based detection. Provides LangChain LCEL-compatible guardrail runnables that block unsafe inputs before LLM execution, plus customizable keyword-based filtering for input/output layers. Designed as a rapid security supplementation layer for development, self-hosted deployments, and foundation model enhancement rather than standalone production protection.

38 stars and 3,076 monthly downloads. Used by 1 other package. Available on PyPI.

Maintenance 10 / 25
Adoption 16 / 25
Maturity 25 / 25
Community 19 / 25

How are scores calculated?

Stars

38

Forks

22

Language

Python

License

Apache-2.0

Last pushed

Feb 14, 2026

Monthly downloads

3,076

Commits (30d)

0

Dependencies

3

Reverse dependents

1

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MaxMLang/pytector"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.