adversarial-attacks-pytorch and PGD-pytorch

| Metric | adversarial-attacks-pytorch | PGD-pytorch |
| --- | --- | --- |
| Overall score | | 47 (Emerging) |
| Maintenance | 0/25 | 0/25 |
| Adoption | 10/25 | 10/25 |
| Maturity | 16/25 | 16/25 |
| Community | 24/25 | 21/25 |
| Stars | 2,147 | 159 |
| Forks | 369 | 40 |
| Downloads | | |
| Commits (30d) | 0 | 0 |
| Language | Python | Jupyter Notebook |
| License | MIT | MIT |
| Status | Stale 6m, No Package, No Dependents | Stale 6m, No Package, No Dependents |

About adversarial-attacks-pytorch

Harry24k/adversarial-attacks-pytorch

PyTorch implementation of adversarial attacks [torchattacks]

This tool helps machine learning engineers and researchers assess the robustness of their deep learning models. It takes an existing PyTorch model and input data (like images) and generates 'adversarial examples' — slightly modified inputs designed to trick the model. The output is a set of these adversarial examples, which can then be used to test how well the model resists subtle attacks.
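To make the idea concrete, here is a minimal sketch of the simplest attack family the library implements (FGSM-style signed-gradient perturbation), written against a toy logistic-regression "model" in plain NumPy. The weights, input, and epsilon below are hypothetical illustration values, not anything from the repository; with the library itself, attacks are instead wrapped behind classes such as `torchattacks.FGSM` or `torchattacks.PGD` that take your PyTorch model and return perturbed inputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy model: logistic regression with fixed weights.
w = np.array([1.0, -2.0, 0.5])   # model weights (illustration only)
x = np.array([0.2, -0.1, 0.3])   # clean input; w @ x > 0, so predicted class 1
eps = 0.3                        # L-infinity perturbation budget

# Gradient of the logistic loss (true label 1) with respect to the input:
# dL/dx = -(1 - sigmoid(w @ x)) * w
grad = -(1.0 - sigmoid(w @ x)) * w

# FGSM: a single signed-gradient step of size eps.
x_adv = x + eps * np.sign(grad)

print(w @ x)       # 0.55  -> clean input classified as class 1
print(w @ x_adv)   # -0.5  -> adversarial input flips to class 0
```

The perturbation is bounded coordinate-wise by `eps`, so the adversarial input stays visually close to the original, which is exactly the "slightly modified inputs" described above.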

Tags: model security, deep learning, robustness, computer vision, AI safety, adversarial machine learning

About PGD-pytorch

Harry24k/PGD-pytorch

A PyTorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"

This project helps machine learning engineers and researchers evaluate how vulnerable their image classification models are to malicious inputs. It takes a pre-trained image classification model and an image, then generates a slightly modified (adversarial) image that tricks the model into misclassifying it. This is useful for understanding and improving the robustness of AI systems in security-sensitive applications.
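The attack from that paper, projected gradient descent (PGD), iterates small signed-gradient steps and projects the result back into an epsilon-ball around the original input. A minimal sketch on a hypothetical toy logistic model (all numbers are illustration values, not the repository's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical toy model and input (illustration only).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.3])    # clean input; w @ x > 0, so class 1
eps, alpha, steps = 0.3, 0.1, 10  # budget, step size, iteration count

x_adv = x.copy()
for _ in range(steps):
    # Gradient of the logistic loss (true label 1) w.r.t. the input.
    grad = -(1.0 - sigmoid(w @ x_adv)) * w
    # Gradient-ascent step on the loss...
    x_adv = x_adv + alpha * np.sign(grad)
    # ...then projection back into the eps-ball (the "projected" in PGD).
    x_adv = x + np.clip(x_adv - x, -eps, eps)

print(w @ x)      # positive: original prediction is class 1
print(w @ x_adv)  # negative: misclassified after the attack
```

The paper's full PGD also starts from a random point inside the epsilon-ball and clamps inputs to the valid pixel range; both refinements are omitted here for clarity.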

Tags: deep-learning-security, model-robustness, adversarial-machine-learning, image-recognition-defense, AI-safety

Scores updated daily from GitHub, PyPI, and npm data.