kahalewai/agent-context-guard
Control Plane Integrity Tool for AI Agents. Cryptographically seal, verify, and audit the markdown files that control your AI Agents.
Implements SHA-256 hashing and HMAC-SHA256 signatures to cryptographically seal markdown context files. Runtime verification is enforced through a library API (`guard.read()`) that agents must call instead of reading files directly. Framework-agnostic adapters cover LangChain, CrewAI, OpenAI, Anthropic, AutoGen, LlamaIndex, MCP, and OpenClaw. A deterministic proposal workflow lets agents suggest changes while only humans can approve them, and every operation is logged to an append-only audit trail.
Available on PyPI.
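The sealing scheme described above can be sketched with Python's standard library alone. This is an illustration of the HMAC-SHA256 seal-and-verify technique, not the agent-context-guard API itself; the key, function names, and sample content are hypothetical.

```python
import hashlib
import hmac

# Hypothetical key -- in practice this would come from a secret store,
# never from the repository itself.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def seal(content: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce an HMAC-SHA256 signature over a context file's bytes."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes = SECRET_KEY) -> bool:
    """Constant-time check that the content still matches its seal."""
    return hmac.compare_digest(seal(content, key), signature)

context = b"# AGENTS.md\nAlways require human approval for writes.\n"
sig = seal(context)

assert verify(context, sig)                              # untampered file passes
assert not verify(context + b"injected line\n", sig)     # any edit breaks the seal
```

Because `verify` recomputes the HMAC and compares it in constant time, an agent (or an attacker editing the markdown) cannot forge a valid seal without the key, which is the property a `guard.read()`-style gate relies on.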
Stars: 6
Forks: 3
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 18, 2026
Monthly downloads: 61
Commits (30d): 0
Dependencies: 4
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/kahalewai/agent-context-guard"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Related agents
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
mattijsmoens/sovereign-shield
AI security framework: tamper-proof action auditing, prompt injection firewall, ethical...
vstorm-co/pydantic-ai-middleware
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle...
Dicklesworthstone/destructive_command_guard
The Destructive Command Guard (dcg) is for blocking dangerous git and shell commands from being...