KevinRabun/judges
MCP server with specialized judges to evaluate AI-generated code for security, cost, scalability, cloud readiness, and best practices.
Combines **deterministic pattern matching with AST analysis** (offline, zero LLM calls) and **LLM-powered deep-review prompts** across 45 specialized domains, plus **200+ auto-fix patches** for instant remediation. Ships as an **MCP server** (stdio-based, for Claude Desktop, Cursor, and VS Code), a **CLI tool**, and a **programmatic API**, with a GitHub Action integration and SARIF output for Code Scanning. Operates as an independent quality gate for AI-generated code, providing **confidence scoring, policy profiles, and plain-language risk summaries**; it complements linters rather than replacing them, covering authentication strategy, data sovereignty, cost patterns, and architectural issues that span multiple files.
Available on npm.
- Stars: 5
- Forks: 1
- Language: TypeScript
- License: MIT
- Category:
- Last pushed: Mar 11, 2026
- Commits (30d): 0
- Dependencies: 4
Get this data via the API:

`curl "https://pt-edge.onrender.com/api/v1/quality/mcp/KevinRabun/judges"`

The endpoint is open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
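The same endpoint can be called programmatically. A minimal TypeScript sketch, assuming only the URL pattern shown in the curl example; the response schema is not documented here, so it is treated as `unknown`:

```typescript
// Base endpoint for the pt-edge quality API (from the curl example above).
const BASE = "https://pt-edge.onrender.com/api/v1/quality/mcp";

// Build the per-repository URL, e.g. .../KevinRabun/judges.
function qualityUrl(owner: string, repo: string): string {
  return `${BASE}/${owner}/${repo}`;
}

// Fetch quality data for one repository. The unauthenticated tier allows
// 100 requests/day; since the response shape is undocumented here, the
// caller receives it as `unknown` and should validate it before use.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(qualityUrl(owner, repo));
  if (!res.ok) throw new Error(`pt-edge API returned HTTP ${res.status}`);
  return res.json();
}

console.log(qualityUrl("KevinRabun", "judges"));
// fetchQuality("KevinRabun", "judges").then(console.log);
```

This uses the global `fetch` available in Node 18+; no dependencies are required.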
Higher-rated alternatives
- **mnemox-ai/idea-reality-mcp**: Pre-build reality check for AI coding agents. Scans GitHub, HN, npm, PyPI & Product Hunt —...
- **ForLoopCodes/contextplus**: Semantic Intelligence for Large-Scale Engineering. Context+ is an MCP server designed for...
- **BenAHammond/code-auditor-mcp**: 🚀 Transform your TypeScript code quality! Lightning-fast auditor catches security flaws,...
- **mustafacagri/ai-quality-gate**: 🚀 Kill the Junior AI Era. 🤖 Level up your AI code to Principal standards. No more sloppy lines...
- **sinedied/grumpydev-mcp**: Let the grumpy senior dev review your code with this MCP server