epappas/llmtrace
Zero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.
Implements an ensemble detection architecture that majority-votes across four specialized detectors, one regex-based and three model-based (DeBERTa, InjecGuard, PIGuard), to identify prompt injections with a reported 87.6% accuracy. Routes all traffic asynchronously through background security and storage engines, so LLM requests incur no added latency, while supporting streaming responses, circuit-breaker protection, and multi-tenant isolation via API keys or custom headers. Integrates with OpenAI SDK clients (Python, Node.js), LangChain, and any OpenAI-compatible provider through a single base_url configuration change.
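The majority-voting idea described above can be sketched as follows. This is a minimal illustration, not llmtrace's actual Rust implementation: the four detector functions are stand-ins named after the detectors in the description, and their matching heuristics are invented for the example.

```python
import re
from typing import Callable, List

def regex_detector(prompt: str) -> bool:
    # Stand-in for the regex detector: flags a classic injection phrase.
    return bool(re.search(r"ignore (all )?previous instructions", prompt, re.IGNORECASE))

# Stand-ins for the three model-based detectors (DeBERTa, InjecGuard,
# PIGuard); the real ones run ML inference, these use toy keyword checks.
def deberta_detector(prompt: str) -> bool:
    return "system prompt" in prompt.lower()

def injecguard_detector(prompt: str) -> bool:
    return "ignore" in prompt.lower()

def piguard_detector(prompt: str) -> bool:
    return "reveal" in prompt.lower()

DETECTORS: List[Callable[[str], bool]] = [
    regex_detector, deberta_detector, injecguard_detector, piguard_detector,
]

def is_injection(prompt: str) -> bool:
    # Majority vote: flag the prompt only if more than half of the
    # detectors (i.e. at least 3 of 4) agree it is an injection.
    votes = sum(1 for detect in DETECTORS if detect(prompt))
    return 2 * votes > len(DETECTORS)
```

The appeal of the voting scheme is that any single noisy detector (for example, an over-eager regex) cannot flag a benign prompt on its own.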
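The single base_url change mentioned above would look roughly like this with the official OpenAI Python SDK. The proxy host, port, and path are assumptions for illustration; consult the repository for the actual listen address.

```python
from openai import OpenAI

# Point the stock OpenAI client at the llmtrace proxy instead of
# api.openai.com. Host and port are illustrative assumptions.
client = OpenAI(
    base_url="http://localhost:8080/v1",  # assumed llmtrace proxy endpoint
    api_key="sk-...",                     # per the description, API keys can double as tenant identifiers
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
```

Because the proxy is OpenAI-compatible, no other application code changes: requests flow through llmtrace's detection and logging pipeline and are forwarded upstream.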
Stars: 35
Forks: 1
Language: Rust
License: MIT
Category:
Last pushed: Mar 11, 2026
Monthly downloads: 18
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/epappas/llmtrace"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
microsoft/agent-governance-toolkit
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and...
ucsandman/DashClaw
🛡️Decision infrastructure for AI agents. Intercept actions, enforce guard policies, require...
vstorm-co/pydantic-ai-middleware
Middleware layer for Pydantic AI — intercept, transform & guard agent calls with 7 lifecycle...
mattijsmoens/sovereign-shield
AI security framework: tamper-proof action auditing, prompt injection firewall, ethical...
vstorm-co/pydantic-ai-shields
Guardrail capabilities for Pydantic AI — cost tracking, prompt injection detection, PII...