llm-guard and llm-confidentiality
One tool is a comprehensive security toolkit for LLM interactions; the other focuses specifically on confidentiality in agentic systems. That makes them potentially complementary: the latter could serve as a specialized component or evaluation strategy within the former's broader security framework.
About llm-guard
protectai/llm-guard
The Security Toolkit for LLM Interactions
Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
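Because llm-guard exposes its scanners as composable Python objects, a minimal pipeline might look like the sketch below. It follows the scan_prompt/scan_output pattern from llm-guard's documentation; exact scanner names, parameters, and return shapes may differ across versions, and the LLM call is a hypothetical stub, so treat this as illustrative rather than canonical.

```python
# Sketch of llm-guard's composable scanner pipeline (based on its documented
# Python API; scanner names and options may vary by version).
from llm_guard import scan_prompt, scan_output
from llm_guard.input_scanners import PromptInjection, Secrets, Toxicity
from llm_guard.output_scanners import FactualConsistency


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real provider call (e.g., OpenAI's API).
    return "Summary: the document describes ..."


# Input scanners run on the user prompt before it reaches the model;
# output scanners run on the model's response.
input_scanners = [PromptInjection(), Secrets(), Toxicity()]
output_scanners = [FactualConsistency()]

prompt = "Summarize this document. My API key is sk-live-1234."

# Each scan returns the (possibly sanitized) text, a per-scanner validity map,
# and per-scanner risk scores.
sanitized_prompt, is_valid, risk_scores = scan_prompt(input_scanners, prompt)
if not all(is_valid.values()):
    raise ValueError(f"Prompt rejected by input scanners: {risk_scores}")

response = call_llm(sanitized_prompt)

sanitized_response, is_valid, risk_scores = scan_output(
    output_scanners, sanitized_prompt, response
)
```

The fine-grained control mentioned above comes from choosing which scanner objects go into each list: dropping or adding a scanner changes exactly which checks run on inputs versus outputs.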
About llm-confidentiality
LostOxygen/llm-confidentiality
Whispers in the Machine: Confidentiality in Agentic Systems