llm-guard and PromptProof

llm-guard
74
Verified
PromptProof
35
Emerging
Maintenance 6/25
Adoption 21/25
Maturity 25/25
Community 22/25
Maintenance 13/25
Adoption 1/25
Maturity 9/25
Community 12/25
Stars: 2,660
Forks: 353
Downloads: 329,796
Commits (30d): 0
Language: Python
License: MIT
Stars: 1
Forks: 1
Downloads:
Commits (30d): 0
Language: Python
License: GPL-3.0
No risk flags
No Package No Dependents

About llm-guard

protectai/llm-guard

The Security Toolkit for LLM Interactions

Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.

About PromptProof

MindfulwareDev/PromptProof

Plug-and-play guardrail prompts for any LLM — injection defense, hallucination control, bias detection, PII protection, and 50+ more. Includes CLI tools, adversarial test suite, and integration templates for OpenAI, Anthropic, Ollama, LangChain & LlamaIndex.

Scores updated daily from GitHub, PyPI, and npm data. How scores work