adversarial-attacks-pytorch and PGD-pytorch
About adversarial-attacks-pytorch
Harry24k/adversarial-attacks-pytorch
PyTorch implementation of adversarial attacks [torchattacks]
This tool helps machine learning engineers and researchers assess the robustness of their deep learning models. It takes an existing PyTorch model and input data (like images) and generates 'adversarial examples' — slightly modified inputs designed to trick the model. The output is a set of these adversarial examples, which can then be used to test how well the model resists subtle attacks.
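To illustrate what "slightly modified inputs designed to trick the model" means, here is a minimal sketch of the single-step FGSM attack (one of the many attacks the library implements), using numpy on a toy logistic classifier rather than the library's actual API; the weights, input, and epsilon below are hypothetical values chosen so the effect is visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step FGSM: perturb x by eps in the direction that increases
    the logistic loss L = log(1 + exp(-y * (w . x)))."""
    score = w @ x
    grad_x = -y * sigmoid(-y * score) * w   # dL/dx, computed analytically
    return x + eps * np.sign(grad_x)        # signed-gradient step

w = np.array([2.0, -1.0])   # toy "model" weights (hypothetical)
x = np.array([1.0, 1.0])    # clean input, correctly classified: w . x = 1 > 0
y = 1.0                     # true label
x_adv = fgsm(x, y, w, eps=0.6)

print(np.sign(w @ x))       # 1.0  (clean input classified correctly)
print(np.sign(w @ x_adv))   # -1.0 (adversarial input flips the prediction)
```

The perturbation is bounded by eps per coordinate, so the adversarial input stays visually close to the original while changing the model's decision.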
About PGD-pytorch
Harry24k/PGD-pytorch
A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"
This project helps machine learning engineers and researchers evaluate how vulnerable their image classification models are to malicious inputs. It takes a pre-trained image classification model and an image, then generates a slightly modified (adversarial) image that tricks the model into misclassifying it. This is useful for understanding and improving the robustness of AI systems in security-sensitive applications.
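The attack from the paper this repo implements is PGD: repeated signed-gradient steps on the input, each followed by projection back into an epsilon-ball around the clean input. A minimal numpy sketch on a toy logistic classifier (not the repo's code; model weights, step size, and epsilon are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd(x, y, w, eps=0.6, alpha=0.2, steps=5):
    """Projected gradient descent on the input: take signed-gradient
    ascent steps on the loss, projecting back into the L-infinity ball
    of radius eps around the clean input x after each step."""
    x_adv = x.copy()
    for _ in range(steps):
        grad_x = -y * sigmoid(-y * (w @ x_adv)) * w  # dL/dx of logistic loss
        x_adv = x_adv + alpha * np.sign(grad_x)      # ascent step of size alpha
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project into the eps-ball
    return x_adv

w = np.array([2.0, -1.0])  # toy linear classifier (hypothetical)
x = np.array([1.0, 1.0])   # clean input, w . x = 1 > 0 (class +1)
x_adv = pgd(x, y=1.0, w=w)

print(np.max(np.abs(x_adv - x)))  # 0.6: perturbation stays within eps
print(np.sign(w @ x_adv))         # -1.0: the model is now fooled
```

Compared with the single-step FGSM, the iterated steps plus projection make PGD a stronger attack under the same perturbation budget, which is why the paper uses it both for evaluation and for adversarial training.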