llm-platform-security/SecGPT
An Execution Isolation Architecture for LLM-Based Agentic Systems
Isolates LLM-based agents in separate processes, using seccomp/setrlimit sandboxing, Redis-backed memory, and permission-gated inter-process communication to defend against app compromise, data theft, and uncontrolled system alteration. Built on LlamaIndex and LangChain with an extensible tool architecture; includes a VanillaGPT baseline for comparative analysis and case studies demonstrating attack prevention.
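The core idea of running each agent "app" in its own resource-limited process can be sketched in stdlib Python. This is an illustrative sketch, not SecGPT's actual API: the function names (`run_app`, `_run_isolated`) and limit values are assumptions, and a full sandbox would additionally install a seccomp syscall filter (e.g. via libseccomp's Python bindings), for which the standard library has no API.

```python
import multiprocessing as mp
import resource

def _run_isolated(task, conn, mem_bytes=512 * 1024 * 1024, cpu_secs=5):
    # Child entry point (hypothetical): apply setrlimit caps before
    # running the task, so a runaway or compromised app is bounded.
    resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))   # cap address space
    resource.setrlimit(resource.RLIMIT_CPU, (cpu_secs, cpu_secs))    # cap CPU seconds
    try:
        conn.send(("ok", task()))
    except Exception as exc:  # resource exhaustion surfaces here, e.g. MemoryError
        conn.send(("err", repr(exc)))
    finally:
        conn.close()

def run_app(task):
    # Run one agent app in its own process and collect the result over a
    # pipe; process isolation keeps it out of sibling apps' memory.
    parent_conn, child_conn = mp.Pipe(duplex=False)
    proc = mp.Process(target=_run_isolated, args=(task, child_conn))
    proc.start()
    proc.join()
    return parent_conn.recv()

def demo_task():
    return 2 + 2

if __name__ == "__main__":
    print(run_app(demo_task))
```

Inter-app messages would then travel only through a trusted hub that checks user-granted permissions before forwarding, rather than through shared memory.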
107 stars. No commits in the last 6 months.
Stars: 107
Forks: 12
Language: Python
License: —
Category:
Last pushed: Jan 31, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/llm-platform-security/SecGPT"
Open to everyone: 100 requests/day with no key; a free key raises the limit to 1,000/day.
Higher-rated alternatives
- microsoft/agent-governance-toolkit: AI Agent Governance Toolkit. Policy enforcement, zero-trust identity, execution sandboxing, and...
- ucsandman/DashClaw: 🛡️ Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
- mattijsmoens/sovereign-shield: AI security framework: tamper-proof action auditing, prompt injection firewall, ethical...
- vstorm-co/pydantic-ai-middleware: Middleware layer for Pydantic AI: intercept, transform & guard agent calls with 7 lifecycle...
- Dicklesworthstone/destructive_command_guard: The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...