pytector and PromptProof
These are complementary tools: pytector provides detection/blocking of prompt injections through ML-based classification, while PromptProof provides defensive prompting strategies that proactively harden LLM behavior against injection attacks and other vulnerabilities.
About pytector
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local models, API-based safeguards, and LangChain guardrails.
Detects prompt injections via transformer-based models (DeBERTa, DistilBERT, ONNX) with local inference, or integrates with Groq's hosted safeguard models for API-based detection. Provides LangChain LCEL-compatible guardrail runnables that block unsafe inputs before LLM execution, plus customizable keyword-based filtering for input/output layers. Designed as a rapid security supplementation layer for development, self-hosted deployments, and foundation model enhancement rather than standalone production protection.
About PromptProof
MindfulwareDev/PromptProof
Plug-and-play guardrail prompts for any LLM — injection defense, hallucination control, bias detection, PII protection, and 50+ more. Includes CLI tools, adversarial test suite, and integration templates for OpenAI, Anthropic, Ollama, LangChain & LlamaIndex.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work