llm-guard and PromptProof
About llm-guard
protectai/llm-guard
The Security Toolkit for LLM Interactions
Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
About PromptProof
MindfulwareDev/PromptProof
Plug-and-play guardrail prompts for any LLM — injection defense, hallucination control, bias detection, PII protection, and 50+ more. Includes CLI tools, adversarial test suite, and integration templates for OpenAI, Anthropic, Ollama, LangChain & LlamaIndex.
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work