llm-guard and SemanticShield
These are competitors offering overlapping prompt injection and LLM security capabilities, though llm-guard dominates with significantly broader adoption and a more mature ecosystem of detectors and sanitizers.
About llm-guard
protectai/llm-guard
The Security Toolkit for LLM Interactions
Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
About SemanticShield
SemanticBrainCorp/SemanticShield
The Security Toolkit for managing Generative AI(especially LLMs) and Supervised Learning processes(Learning and Inference).
Related comparisons
Scores updated daily from GitHub, PyPI, and npm data. How scores work