Open-Prompt-Injection and pint-benchmark

Open-Prompt-Injection
53
Established
pint-benchmark
47
Emerging
Maintenance 6/25
Adoption 10/25
Maturity 16/25
Community 21/25
Maintenance 6/25
Adoption 10/25
Maturity 16/25
Community 15/25
Stars: 406
Forks: 64
Downloads:
Commits (30d): 0
Language: Python
License: MIT
Stars: 166
Forks: 21
Downloads:
Commits (30d): 0
Language: Jupyter Notebook
License: MIT
No Package No Dependents
No Package No Dependents

About Open-Prompt-Injection

liu00222/Open-Prompt-Injection

This repository provides a benchmark for prompt injection attacks and defenses in LLMs

This toolkit helps evaluate and implement defenses against 'prompt injection' attacks on applications built with large language models (LLMs). It takes an LLM, a target task (like sentiment analysis), and potential injected instructions, then measures how well the LLM resists or identifies these malicious prompts. Anyone building or managing LLM-powered applications who needs to ensure their AI models behave as intended, without being hijacked by unexpected user input, would use this.

LLM security AI application development prompt engineering model risk management cybersecurity

About pint-benchmark

lakeraai/pint-benchmark

A benchmark for prompt injection detection systems.

This project offers a standardized way to compare how well different AI systems can spot and block malicious 'prompt injection' attacks. It takes various text inputs, including those designed to trick AI models, and evaluates if a detection system correctly identifies them as harmful or safe. AI developers, security engineers, and MLOps teams can use this to rigorously assess and improve their AI system's defenses.

AI security LLM evaluation prompt engineering MLOps AI governance

Scores updated daily from GitHub, PyPI, and npm data. How scores work