null1024-ws/Poisoning-Attack-on-Code-Completion-Models
USENIX Security'24 Paper Repo
This project helps security researchers and developers understand and demonstrate vulnerabilities in AI-powered code completion tools. It shows how malicious code, disguised as safe, can be injected into the training data of these models. The output is a method to create poisoned code examples that can trick code completion systems into suggesting insecure code.
No commits in the last 6 months.
Use this if you are a security researcher or red team professional aiming to analyze and expose potential weaknesses in large language models used for code completion.
Not ideal if you are looking for a tool to fix vulnerabilities or write secure code directly, as this focuses on demonstrating attack vectors.
Stars
8
Forks
—
Language
Python
License
—
Category
Last pushed
May 12, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/null1024-ws/Poisoning-Attack-on-Code-Completion-Models"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
OWASP/www-project-top-10-for-large-language-model-applications
OWASP Top 10 for Large Language Model Apps (Part of the GenAI Security Project)
esbmc/esbmc-ai
Automated Code Repair suite powered by ESBMC and LLMs.
Mohannadcse/AlloySpecRepair
An Empirical Evaluation of Pre-trained Large Language Models for Repairing Declarative Formal...
GURPREETKAURJETHRA/LLM-SECURITY
Securing LLM's Against Top 10 OWASP Large Language Model Vulnerabilities 2024
lambdasec/autogrep
Autogrep automates Semgrep rule generation and filtering by using LLMs to analyze vulnerability...