Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Provides modular attack and defense implementations across diverse ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.) and data modalities (images, audio, video, tabular), enabling systematic adversarial testing for classification, detection, and generation tasks. Built on a framework-agnostic estimator abstraction that decouples threat models from underlying model implementations, allowing unified security evaluation pipelines.
5,886 stars. Used by 1 other package. Available on PyPI.
Stars: 5,886
Forks: 1,296
Language: Python
License: MIT
Category:
Last pushed: Dec 12, 2025
Commits (30d): 0
Dependencies: 6
Reverse dependents: 1
Related frameworks
cleverhans-lab/cleverhans
An adversarial example library for constructing attacks, building defenses, and benchmarking both
bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
DSE-MSU/DeepRobust
A PyTorch adversarial library for attack and defense methods on images and graphs
cassidylaidlaw/perceptual-advex
Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen...