nimad70/VulRAG
Investigating the vulnerability of Large Language Models (LLMs) to misinformation in Retrieval-Augmented Generation (RAG) systems by poisoning vector databases and analyzing LLM responses to identify potential weaknesses and exploitation risks.
No commits in the last 6 months.
Stars: 4
Forks: —
Language: Jupyter Notebook
License: CC-BY-SA-4.0
Category:
Last pushed: Mar 15, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/nimad70/VulRAG"
Open to everyone: no key is needed for up to 100 requests/day, and a free key raises the limit to 1,000/day.
Higher-rated alternatives
LLAMATOR-Core/llamator
Red-teaming Python framework for testing chatbots and GenAI systems.
sleeepeer/PoisonedRAG
[USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented...
JuliusHenke/autopentest
CLI enabling more autonomous black-box penetration tests using Large Language Models (LLMs)
kelkalot/simpleaudit
Lets you red-team your AI systems through adversarial probing. It is simple, effective, and...
SecurityClaw/SecurityClaw
A modular, skill-based autonomous Security Operations Center (SOC) agent that monitors...