yeraydoblasbueno/llm-security-framework
Testing LLM vulnerabilities (Jailbreaks, Prompt Injections) locally using Python, Ollama, and an advanced LLM-as-a-Judge evaluation system.
Quality score: 14 / 100
Experimental · No License · No Package · No Dependents
Score breakdown:
Maintenance: 13 / 25
Adoption: 0 / 25
Maturity: 1 / 25
Community: 0 / 25
Stars: —
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Mar 23, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/yeraydoblasbueno/llm-security-framework"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
Higher-rated alternatives
protectai/llm-guard (score 74): The Security Toolkit for LLM Interactions
MaxMLang/pytector (score 70): Easy to use LLM Prompt Injection Detection / Detector Python Package with support for local...
agencyenterprise/PromptInject (score 55): PromptInject is a framework that assembles prompts in a modular fashion to provide a...
utkusen/promptmap (score 51): a security scanner for custom LLM applications
Dicklesworthstone/acip (score 49): The Advanced Cognitive Inoculation Prompt