protectai/rebuff

LLM Prompt Injection Detector

Archived · Score: 45 / 100 · Emerging
Implements a four-layer defense strategy combining heuristic filtering, LLM-based analysis, vector-database embeddings for attack-signature learning, and canary token injection to detect prompt leakage. Integrates with OpenAI APIs and vector databases (Pinecone or Chroma) to maintain an evolving attack vault that improves detection over time. Provides SDKs for Python and JavaScript/TypeScript with self-hosting via Supabase, supporting both synchronous injection detection and asynchronous canary-token leak monitoring.
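The layered approach described above can be sketched roughly as follows. This is an illustrative Python sketch, not Rebuff's actual SDK API: the function names and regex patterns are hypothetical, and the LLM-analysis and vector-database layers are omitted since they require external services.

```python
import re
import secrets

# Hypothetical patterns for the heuristic layer (illustrative only).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
]

def heuristic_score(user_input: str) -> float:
    """Layer 1: cheap regex heuristics for known injection phrasing.
    Returns the fraction of known patterns that match."""
    hits = sum(
        bool(re.search(p, user_input, re.IGNORECASE))
        for p in INJECTION_PATTERNS
    )
    return hits / len(INJECTION_PATTERNS)

def add_canary(prompt: str) -> tuple[str, str]:
    """Layer 4: embed a secret canary token in the prompt sent to the
    model. If the token later appears in output, the prompt leaked."""
    canary = secrets.token_hex(8)
    return f"<!-- canary: {canary} -->\n{prompt}", canary

def canary_leaked(model_output: str, canary: str) -> bool:
    """Asynchronous leak check: scan the model's output for the canary."""
    return canary in model_output
```

A caller would score the input first, reject or escalate above some threshold, then wrap the surviving prompt with `add_canary` and check each response with `canary_leaked`.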

1,439 stars. No commits in the last 6 months.

Archived · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 19 / 25


Stars: 1,439
Forks: 128
Language: TypeScript
License: Apache-2.0
Last pushed: Aug 07, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/protectai/rebuff"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.