kelkalot/simpleaudit
Lets you red-team your AI systems through adversarial probing. It is simple, effective, and requires minimal setup.
Provides multilingual, scenario-based adversarial probing with support for both local models (via Ollama, Llama.cpp, LM Studio) and API-hosted providers, using a separate judge model to evaluate responses. Built-in safety scenarios and an extensible framework allow custom test creation; results are visualized through an interactive web dashboard and can be compared across multiple models simultaneously.
Used by 1 other package. Available on PyPI.
Stars: 6
Forks: 2
Language: Python
License: MIT
Category:
Last pushed: Feb 24, 2026
Monthly downloads: 40
Commits (30d): 0
Dependencies: 2
Reverse dependents: 1
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/kelkalot/simpleaudit"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
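The curl call above can be replicated from Python using only the standard library. A minimal sketch: the endpoint URL pattern is taken from the listing, but the shape of the JSON response is not documented here, so `fetch_quality` (a hypothetical helper name) simply returns the decoded body.

```python
import json
import urllib.request


def quality_url(owner: str, repo: str) -> str:
    """Build the quality endpoint URL for an owner/repo pair,
    matching the curl example above."""
    return f"https://pt-edge.onrender.com/api/v1/quality/rag/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """GET the endpoint and decode the JSON body. No API key is
    needed for up to 100 requests/day; the response schema is an
    assumption and should be inspected before relying on fields."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Print the URL for the package on this page.
    print(quality_url("kelkalot", "simpleaudit"))
```

For higher rate limits, the free key mentioned above would presumably be sent as a request header, but the header name is not documented on this page.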
Related tools
LLAMATOR-Core/llamator
Red Teaming python-framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
rohansx/cloakpipe
Privacy middleware for LLM & RAG pipelines - consistent pseudonymization, encrypted vault, SSE...