mrSamDev/llm-moat
TypeScript toolkit for prompt injection detection, sanitization, and LLM input security with rule-based and semantic classifier support.
Stars
—
Forks
—
Language
TypeScript
License
MIT
Category
Last pushed
Mar 20, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/mrSamDev/llm-moat"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
liu00222/Open-Prompt-Injection
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
lakeraai/pint-benchmark
A benchmark for prompt injection detection systems.
R3dShad0w7/PromptMe
PromptMe is an educational project that showcases security vulnerabilities in large language...
cybozu/prompt-hardener
Prompt Hardener analyzes prompt-injection-originated risk in LLM-based agents and applications.
StavC/Here-Comes-the-AI-Worm
Here Comes the AI Worm: Preventing the Propagation of Adversarial Self-Replicating Prompts...