llm-guard and llm-confidentiality

llm-guard provides a comprehensive security toolkit for LLM interactions, while llm-confidentiality focuses specifically on confidentiality in agentic systems. The two could be complementary: the latter's specialized confidentiality testing could serve as one component or strategy within the former's broader security framework.

llm-guard — Score: 74 (Verified)
Maintenance 6/25 · Adoption 21/25 · Maturity 25/25 · Community 22/25
Stars: 2,660 · Forks: 353 · Downloads: 329,796 · Commits (30d): 0
Language: Python · License: MIT
No risk flags

llm-confidentiality — Score: 43 (Emerging)
Maintenance 6/25 · Adoption 8/25 · Maturity 16/25 · Community 13/25
Stars: 42 · Forks: 6 · Downloads: · Commits (30d): 0
Language: Python · License: Apache-2.0
No Package · No Dependents

About llm-guard

protectai/llm-guard

The Security Toolkit for LLM Interactions

Provides modular input and output scanners for LLM pipelines—including prompt injection detection, secret redaction, toxicity analysis, and factual consistency checking—deployable as a Python library or standalone API. Uses a composable scanner architecture enabling fine-grained control over which security checks run on user inputs and model outputs. Integrates with OpenAI's API and other LLM providers through straightforward configuration.
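The composable scanner pattern described above can be sketched in a few lines of plain Python. This is an illustrative toy, not llm-guard's actual API: the names `ScanResult`, `scan`, `secret_redactor`, and `length_guard` are hypothetical, and the two scanners are stand-ins for the library's real redaction and limit checks.

```python
import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScanResult:
    sanitized: str   # possibly-redacted text
    valid: bool      # whether the text passed this check
    score: float     # risk score in [0, 1]

# A scanner is any callable taking text and returning a ScanResult.
Scanner = Callable[[str], ScanResult]

def secret_redactor(text: str) -> ScanResult:
    # Toy stand-in for secret redaction: mask anything shaped like an API key.
    redacted, n = re.subn(r"sk-[A-Za-z0-9]{8,}", "[REDACTED]", text)
    return ScanResult(redacted, True, 1.0 if n else 0.0)

def length_guard(text: str) -> ScanResult:
    # Toy stand-in for a token-limit check.
    ok = len(text) <= 1000
    return ScanResult(text, ok, 0.0 if ok else 1.0)

def scan(scanners: list[Scanner], text: str) -> tuple[str, bool, dict[str, float]]:
    # Run scanners in order, threading the sanitized text through each one
    # and collecting per-scanner risk scores.
    scores: dict[str, float] = {}
    valid = True
    for scanner in scanners:
        result = scanner(text)
        text = result.sanitized
        valid = valid and result.valid
        scores[scanner.__name__] = result.score
    return text, valid, scores
```

Because each check is an independent callable, callers get the fine-grained control the description mentions simply by choosing which scanners go in the list, e.g. `scan([secret_redactor, length_guard], prompt)` for inputs and a different list for model outputs.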

About llm-confidentiality

LostOxygen/llm-confidentiality

Whispers in the Machine: Confidentiality in Agentic Systems

Scores updated daily from GitHub, PyPI, and npm data.