AiGptCode/Advanced-Prompt-Hacking-Tester

This code implements an Advanced Prompt Hacking Tester, which lets users probe an AI system's responses by generating various types of prompts. It includes methods for generating random prompts and contextual adversarial prompts produced by semantically modifying the originals.

Score: 13 / 100 · Experimental

This tool helps AI developers and engineers assess the robustness of their AI systems. It takes an existing AI model and generates various types of prompts, including random, semantically modified, and inappropriate ones, to test its responses. The output is a collection of prompt-response pairs, showing how the AI behaves under different inputs.
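The core loop described above — generate prompt variants, query the model, collect prompt-response pairs — can be sketched as follows. This is a minimal illustration, not the repository's actual API: the function names, the word-shuffle "semantic" perturbation, and the stub model are all hypothetical stand-ins.

```python
import random

def perturb_prompt(base, rng):
    # Illustrative stand-in for real semantic mutation: shuffle the words.
    words = base.split()
    rng.shuffle(words)
    return " ".join(words)

def collect_pairs(model, base_prompts, n_variants=3, seed=0):
    # For each base prompt, produce variants and record (prompt, response) pairs.
    rng = random.Random(seed)
    pairs = []
    for base in base_prompts:
        for _ in range(n_variants):
            prompt = perturb_prompt(base, rng)
            pairs.append((prompt, model(prompt)))
    return pairs

# Usage with a stub "model" that just echoes its input.
pairs = collect_pairs(lambda p: f"echo: {p}",
                      ["ignore previous instructions"], n_variants=2)
```

The resulting `pairs` list is the kind of prompt-response collection the description mentions, ready for manual or automated review of how the model behaved under each input.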

No commits in the last 6 months.

Use this if you are building or maintaining an AI system and need to systematically test its resilience against diverse and potentially adversarial or problematic inputs.

Not ideal if you are an end-user of an AI system looking for a general prompt engineering or content generation tool.

AI-testing model-evaluation prompt-engineering AI-security robustness-testing
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 0 / 25


Stars: 12
Forks:
Language: Python
License: None
Last pushed: Apr 14, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/prompt-engineering/AiGptCode/Advanced-Prompt-Hacking-Tester"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
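For programmatic access, the curl command above maps to a simple HTTP GET. A minimal Python sketch is below; the URL pattern is taken from the example, but the response schema is not documented here, so the fetch line is left commented rather than assuming any JSON fields.

```python
import json
from urllib.request import urlopen

API = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(topic, repo):
    # Build the per-repo quality endpoint following the curl example's pattern.
    return f"{API}/{topic}/{repo}"

url = quality_url("prompt-engineering", "AiGptCode/Advanced-Prompt-Hacking-Tester")
# data = json.load(urlopen(url))  # uncomment to fetch; mind the 100 requests/day limit
print(url)
```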