AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection
A step-by-step walkthrough of the Lakera Gandalf AI challenge, showcasing real-world prompt injection techniques and LLM security insights.
No commits in the last 6 months.
Stars: 13
Forks: 1
Language: —
License: MIT
Category:
Last pushed: Apr 12, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
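For programmatic use, here is a minimal Python sketch of the same request using only the standard library. It assumes the endpoint returns JSON (the curl example above does not show the response format), and it uses the keyless tier, so no authentication header is needed:

import json
import urllib.request

# Same endpoint as the curl example; no key needed for up to 100 requests/day.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/"
    "AdityaBhatt3010/Hacking-Lakera-Gandalf-AI-via-Prompt-Injection"
)

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)  # assumption: the API responds with a JSON body

print(data)

How a free key is attached for the 1,000/day tier is not documented here, so it is omitted rather than guessed.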
Higher-rated alternatives
protectai/llm-guard
The Security Toolkit for LLM Interactions
MaxMLang/pytector
Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject
PromptInject is a framework that assembles prompts in a modular fashion to provide a...
utkusen/promptmap
a security scanner for custom LLM applications
Dicklesworthstone/acip
The Advanced Cognitive Inoculation Prompt