KevinRabun/judges

MCP server with specialized judges to evaluate AI-generated code for security, cost, scalability, cloud readiness, and best practices.

Quality score: 47 / 100 (Emerging)

Combines **deterministic pattern matching with AST analysis** (offline, zero LLM calls) and **LLM-powered deep-review prompts** across 45 specialized domains, plus **200+ auto-fix patches** for instant remediation. Ships as an **MCP server** (stdio-based for Claude Desktop, Cursor, VS Code), **CLI tool**, and **programmatic API**, with GitHub Action integration and SARIF output for Code Scanning. Operates as an independent quality gate for AI-generated code, providing **confidence scoring, policy profiles, and plain-language risk summaries** without replacing linters—covering authentication strategy, data sovereignty, cost patterns, and architectural issues across multiple files.

Available on npm.
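Because the server speaks MCP over stdio, a client such as Claude Desktop can launch it from its configuration file. A minimal sketch, assuming the npm package can be run via `npx`; the actual published package name is not stated above, so `<npm-package-name>` is a placeholder:

```json
{
  "mcpServers": {
    "judges": {
      "command": "npx",
      "args": ["-y", "<npm-package-name>"]
    }
  }
}
```

Cursor and VS Code accept an equivalent stdio entry in their own MCP settings files.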

Score breakdown:

- Maintenance: 13 / 25
- Adoption: 4 / 25
- Maturity: 18 / 25
- Community: 12 / 25


- Stars: 5
- Forks: 1
- Language: TypeScript
- License: MIT
- Category: code-review-mcp
- Last pushed: Mar 11, 2026
- Commits (30d): 0
- Dependencies: 4

Get this data via API:

```shell
curl "https://pt-edge.onrender.com/api/v1/quality/mcp/KevinRabun/judges"
```

Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
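The same endpoint can also be called programmatically. A minimal TypeScript sketch using the built-in `fetch` (Node 18+); the response schema is not documented above, so the result is left untyped:

```typescript
// Build the quality-endpoint URL for a given GitHub owner/repo pair.
const endpoint = (owner: string, repo: string): string =>
  `https://pt-edge.onrender.com/api/v1/quality/mcp/${owner}/${repo}`;

// Fetch the quality data; no API key is needed for up to 100 requests/day.
async function fetchQuality(owner: string, repo: string): Promise<unknown> {
  const res = await fetch(endpoint(owner, repo));
  if (!res.ok) throw new Error(`API returned HTTP ${res.status}`);
  return res.json();
}

// Example: fetchQuality("KevinRabun", "judges").then(console.log);
```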