MindfulwareDev/PromptProof
Plug-and-play guardrail prompts for any LLM — injection defense, hallucination control, bias detection, PII protection, and 50+ more. Includes CLI tools, adversarial test suite, and integration templates for OpenAI, Anthropic, Ollama, LangChain & LlamaIndex.
Stars: 1
Forks: 1
Language: Python
License: GPL-3.0
Category:
Last pushed: Mar 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MindfulwareDev/PromptProof"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
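For scripted use, the same endpoint can be queried directly from Python. This is a minimal sketch using only the URL shown in the curl example above; the response schema is not documented on this page, so the script simply prints whatever JSON comes back.

import json
import urllib.request

# Same endpoint as the curl example above; no key is needed within the 100 requests/day limit.
URL = "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/MindfulwareDev/PromptProof"

with urllib.request.urlopen(URL, timeout=10) as resp:
    data = json.load(resp)

# The exact response fields are not shown here, so dump the full payload.
print(json.dumps(data, indent=2))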
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
utkusen/promptmap
a security scanner for custom LLM applications
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt