serre-lab/Adversarial-Alignment
Scaling up deep neural networks to improve their performance on ImageNet makes them more tolerant to adversarial attacks, but successful attacks on these models are misaligned with human perception.
No commits in the last 6 months.
Stars: 7
Forks: 1
Language: Jupyter Notebook
License: MIT
Category: ml-frameworks
Last pushed: Jun 28, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/serre-lab/Adversarial-Alignment"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
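The same endpoint can be queried from Python with the standard library alone. A minimal sketch, assuming the URL pattern shown in the curl example above (`/api/v1/quality/<category>/<owner>/<repo>`); the shape of the JSON response is not documented here, so the returned dict's fields are unknown:

```python
import json
import urllib.request

# Base path taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repository."""
    return f"{API_BASE}/{category}/{owner}/{repo}"


def fetch_quality(category: str, owner: str, repo: str) -> dict:
    """Fetch the quality data as a dict (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(category, owner, repo)) as resp:
        return json.load(resp)


# For this repo the request URL would be:
url = quality_url("ml-frameworks", "serre-lab", "Adversarial-Alignment")
print(url)
```

With a free API key, the 1,000/day tier presumably requires passing the key with the request (e.g. as a header or query parameter); the exact mechanism is not specified on this page.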
Higher-rated alternatives
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion,...
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
DSE-MSU/DeepRobust
A pytorch adversarial library for attack and defense methods on images and graphs