protectai/rebuff
LLM Prompt Injection Detector
Archived
Implements a four-layered defense strategy combining heuristic filtering, LLM-based analysis, vector database embeddings for attack signature learning, and canary token injection to detect prompt leakage. Integrates with OpenAI APIs and vector databases (Pinecone or Chroma) to maintain an evolving attack vault that improves detection over time. Provides SDKs for Python and JavaScript/TypeScript with self-hosting capabilities via Supabase, supporting both synchronous injection detection and asynchronous canary word leak monitoring.
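The canary-token layer described above can be sketched in a few lines: embed a random marker in the prompt, then check whether it appears in the model's output. This is an illustrative sketch of the technique, not Rebuff's actual implementation; the function names and marker format are made up here.

```python
import secrets

# Hypothetical marker format; Rebuff's real format may differ.
CANARY_FORMAT = "<!-- canary: {} -->"

def add_canary_word(prompt: str) -> tuple[str, str]:
    """Prepend a random canary token to a prompt; return (prompt, token)."""
    canary = secrets.token_hex(8)
    return CANARY_FORMAT.format(canary) + "\n" + prompt, canary

def is_canary_leaked(llm_output: str, canary: str) -> bool:
    """If the model's output echoes the canary, the prompt likely leaked."""
    return canary in llm_output
```

A caller would wrap its system prompt with `add_canary_word`, send the result to the LLM, and then run `is_canary_leaked` over the response to flag prompt leakage asynchronously.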
1,439 stars. No commits in the last 6 months.
Stars
1,439
Forks
128
Language
TypeScript
License
Apache-2.0
Category
Prompt Engineering
Last pushed
Aug 07, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/rebuff"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
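The curl call above can also be made from Python. A minimal sketch, assuming only the URL shape shown in the example (the response schema is not documented here, so it is parsed as generic JSON):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a given category and repository."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch and decode the JSON quality record (requires network access)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("prompt-engineering", "protectai", "rebuff")` hits the same endpoint as the curl command.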
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
utkusen/promptmap
a security scanner for custom LLM applications
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt