pytector and PromptProof

These are complementary tools: pytector provides detection/blocking of prompt injections through ML-based classification, while PromptProof provides defensive prompting strategies that proactively harden LLM behavior against injection attacks and other vulnerabilities.

pytector
70
Verified
PromptProof
35
Emerging
Maintenance 10/25
Adoption 16/25
Maturity 25/25
Community 19/25
Maintenance 13/25
Adoption 1/25
Maturity 9/25
Community 12/25
Stars: 38
Forks: 22
Downloads: 3,076
Commits (30d): 0
Language: Python
License: Apache-2.0
Stars: 1
Forks: 1
Downloads:
Commits (30d): 0
Language: Python
License: GPL-3.0
No risk flags
No Package No Dependents

About pytector

MaxMLang/pytector

Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.

Detects prompt injections via transformer-based models (DeBERTa, DistilBERT, ONNX) with local inference, or integrates with Groq's hosted safeguard models for API-based detection. Provides LangChain LCEL-compatible guardrail runnables that block unsafe inputs before LLM execution, plus customizable keyword-based filtering for input/output layers. Designed as a rapid security supplementation layer for development, self-hosted deployments, and foundation model enhancement rather than standalone production protection.

About PromptProof

MindfulwareDev/PromptProof

Plug-and-play guardrail prompts for any LLM — injection defense, hallucination control, bias detection, PII protection, and 50+ more. Includes CLI tools, adversarial test suite, and integration templates for OpenAI, Anthropic, Ollama, LangChain & LlamaIndex.

Scores updated daily from GitHub, PyPI, and npm data. How scores work