project-codeguard/rules
Project CodeGuard is an AI model-agnostic security framework and ruleset that embeds secure-by-default practices into AI coding workflows (generation and review). It ships core security rules, translators for popular coding agents, and validators to test rule compliance.
The framework uses a declarative rule format with pluggable validators that run security checks against code-generation output, and it integrates with Claude, GitHub Copilot, and other LLM-based coding assistants through standardized translators. Rules cover OWASP Top 10 vulnerabilities, dependency scanning, and cryptographic-misuse patterns, and results can be aggregated across multiple AI agents in CI/CD pipelines. The architecture separates rule definitions from enforcement logic, so organizations can customize security policies without modifying the core validators.
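To make the "declarative rule format" concrete, here is a hypothetical rule definition. The field names and schema are illustrative only, not CodeGuard's actual format; it sketches how a cryptographic-misuse rule might be declared separately from the validator that enforces it.

```yaml
# Hypothetical rule sketch -- field names are assumptions, not the real schema.
id: crypto-weak-hash
category: cryptographic-misuse   # one of the rule areas named above
severity: high
description: Flag MD5/SHA-1 use for security-sensitive hashing
match:
  languages: [python]
  patterns:
    - "hashlib.md5("
    - "hashlib.sha1("
remediation: Use hashlib.sha256 or a stronger hash.
```

Because the rule is pure data, a translator can rewrite it into the instruction format of each coding agent, and a validator can check generated code against it without the rule knowing about either.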
Stars: 394
Forks: 51
Language: Python
License: —
Category:
Last pushed: Jan 29, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/project-codeguard/rules"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
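The curl call above can be wrapped in a small Python helper. The URL pattern is taken directly from the example; the shape of the JSON response is not documented here, so the helper returns the parsed payload as-is rather than assuming any fields.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def agent_quality_url(owner: str, repo: str) -> str:
    # Mirrors the curl example above: /api/v1/quality/agents/<owner>/<repo>
    return f"{BASE}/{owner}/{repo}"

def fetch_agent_quality(owner: str, repo: str) -> dict:
    # Keyless requests are limited to 100/day (see the note above).
    with urllib.request.urlopen(agent_quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_agent_quality("project-codeguard", "rules")` requests the same URL as the curl command shown above.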
Related agents
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
vstorm-co/pydantic-ai-middleware
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle...
mattijsmoens/sovereign-shield
AI security framework: tamper-proof action auditing, prompt injection firewall, ethical...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...