ZenGuard-AI/fast-llm-security-guardrails
The fastest Trust Layer for AI Agents
Provides modular detectors for prompt injection, jailbreak, PII, and topic/keyword filtering as runtime guardrails for LLM applications. Integrates with LangChain and LlamaIndex frameworks, with specialized support for Salesforce Agentforce deployments. Offers tiered infrastructure (free BASE tier with rate limits, enterprise DEDICATED tier for high-throughput scenarios) via Python SDK.
152 stars.
Stars
152
Forks
21
Language
Python
License
MIT
Category
Last pushed
Feb 03, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/ZenGuard-AI/fast-llm-security-guardrails"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Featured in
Related agents
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
vstorm-co/pydantic-ai-middleware
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle...
mattijsmoens/sovereign-shield
AI security framework: tamper-proof action auditing, prompt injection firewall, ethical...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...