ankitlade12/AgentArmor
The full-stack safety layer for AI agents. Budget limits, prompt injection shields, PII filtering, output firewalls, and hooks — in 2 lines of code.
Available on PyPI.
Stars: 2
Forks: —
Language: Python
License: MIT
Category: —
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/ankitlade12/AgentArmor"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
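The same endpoint can be called from Python. This is a minimal sketch using only the standard library; the URL comes from the curl example above, but the shape of the JSON response (field names, nesting) is not documented on this page and is left for the caller to inspect.

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report for one repository.

    No API key is required for the free tier (100 requests/day).
    """
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

# Example (performs a network request):
#   report = fetch_quality("ankitlade12", "AgentArmor")
#   print(json.dumps(report, indent=2))
```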
Higher-rated alternatives
superagent-ai/superagent
Superagent protects your AI applications against prompt injections, data leaks, and harmful...
hexitlabs/vigil
🛡️ Open-source safety guardrail for AI agent tool calls. <2ms, zero dependencies.
mguard-ai/mguard
Memory defense for AI agents — stops MINJA, AgentPoison, and MemoryGraft attacks. Zero dependencies.
Jitera-Labs/openguard
Safety proxy for your AI Agents
WardLink/TrustLayer--Security-Control-Plane-For-LLM-AI
TrustLayer is an API-first security control plane for LLM apps and AI agents. It protects...