# Jailbreak Attack Analysis Tools for LLMs

Tools, datasets, and methods for generating, analyzing, and understanding jailbreak attacks against LLMs, including attack taxonomies, prompt injection techniques, and adversarial methods. This category does NOT include defense mechanisms, safety alignment, or general robustness improvements.

16 jailbreak-attack analysis tools are tracked. 2 score above 50 (the established tier). The highest-rated is wuyoscar/ISC-Bench at 68/100 with 677 stars. 2 of the top 10 are actively maintained.
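
The tier labels track the score: the text above gives >50 as the Established cutoff, and the table below constrains the remaining boundary. A minimal Python sketch of the apparent bucketing, assuming an Emerging/Experimental cutoff of 27 (the table only pins it between 26 and 31, so the exact value is a guess):

```python
def tier(score: int) -> str:
    """Map a 0-100 quality score to its tier label.

    The >50 Established cutoff is stated above. The Emerging/Experimental
    boundary is NOT documented; 27 is an assumption consistent with the
    table (lowest Emerging score: 31, highest Experimental score: 26).
    """
    if score > 50:
        return "Established"
    if score >= 27:
        return "Emerging"
    return "Experimental"
```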

Get all 16 projects as JSON:

```bash
curl "https://pt-edge.onrender.com/api/v1/datasets/quality?domain=llm-tools&subcategory=jailbreak-attacks-analysis&limit=20"
```

The API is open to everyone: 100 requests/day with no key, or 1,000 requests/day with a free key.
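
For programmatic use, here is a minimal Python sketch of the same call. The response schema is not documented, so the `projects` field name and the per-item fields are illustrative guesses, and the `X-API-Key` header name for the keyed tier is also an assumption (the keyless tier needs no header):

```python
import requests

API_URL = "https://pt-edge.onrender.com/api/v1/datasets/quality"

def fetch_projects(api_key: str | None = None) -> list:
    """Fetch the jailbreak-attacks-analysis project list from the quality API."""
    params = {
        "domain": "llm-tools",
        "subcategory": "jailbreak-attacks-analysis",
        "limit": 20,
    }
    # ASSUMPTION: the key is sent as an X-API-Key header; the keyless
    # 100-requests/day tier works with no header at all.
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(API_URL, params=params, headers=headers, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    # ASSUMPTION: the payload is either a bare list of projects or an
    # object with a "projects" field; the field name is a guess.
    return payload if isinstance(payload, list) else payload.get("projects", [])

if __name__ == "__main__":
    for project in fetch_projects():
        print(project)
```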

| # | Tool | Description | Score | Tier |
|---|------|-------------|-------|------|
| 1 | wuyoscar/ISC-Bench | Internal Safety Collapse: Turning LLMs into a "Jailbroken State" Without "a... | 68 | Established |
| 2 | yueliu1999/Awesome-Jailbreak-on-LLMs | Awesome-Jailbreak-on-LLMs is a collection of state-of-the-art, novel,... | 57 | Established |
| 3 | yiksiu-chan/SpeakEasy | [ICML 2025] Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple... | 42 | Emerging |
| 4 | xirui-li/DrAttack | Official implementation of paper: DrAttack: Prompt Decomposition and... | 35 | Emerging |
| 5 | Techiral/awesome-llm-jailbreaks | Latest AI Jailbreak Payloads & Exploit Techniques for GPT, QWEN, and all LLM Models | 34 | Emerging |
| 6 | tmlr-group/DeepInception | [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" | 34 | Emerging |
| 7 | NeuralTrust/echo-chamber | Code and examples for Echo Chamber LLM Jailbreak. | 32 | Emerging |
| 8 | CryptoAILab/FigStep | [AAAI'25 (Oral)] Jailbreaking Large Vision-language Models via Typographic... | 31 | Emerging |
| 9 | erfanshayegani/Jailbreak-In-Pieces | [ICLR 2024 Spotlight 🔥] - [Best Paper Award SoCal NLP 2023 🏆] - Jailbreak... | 26 | Experimental |
| 10 | AetherPrior/TrickLLM | This repository contains the code for the paper "Tricking LLMs into... | 26 | Experimental |
| 11 | RobustNLP/DeRTa | A novel approach to improve the safety of large language models, enabling... | 24 | Experimental |
| 12 | vicgalle/merging-self-critique-jailbreaks | "Merging Improves Self-Critique Against Jailbreak Attacks", code and models | 22 | Experimental |
| 13 | michael-borck/taxonomy-of-ai-jailbreaks | Categorizes AI jailbreak tactics using taxonomic analysis to enhance LLM... | 21 | Experimental |
| 14 | Yahy5715/jailbreak-defense | Detect and prevent large language model jailbreaks using hidden state causal... | 14 | Experimental |
| 15 | Glor1us/llm-jailbreak-vulnerability-analysis | Experimental study of jailbreak and prompt injection vulnerabilities in... | 14 | Experimental |
| 16 | KDEGroup/MPA | Source code for COLING'25 paper "Monte Carlo Tree Search Based Prompt... | 13 | Experimental |