Adversarial Robustness Toolbox: crafting and analysis of attacks and defense methods for machine learning models

Adversarial Robustness Toolbox

Adversarial Robustness 360 Toolbox (ART) is a Python library that supports developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats, helping to make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) deliberately modified so that the Machine Learning model produces a response desired by the attacker. ART provides the tools to build and deploy defenses and to test them with adversarial attacks.
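
As a rough illustration of this workflow, the sketch below trains an ordinary scikit-learn model, wraps it in an ART classifier, and crafts adversarial examples with the Fast Gradient Method. This is a minimal sketch rather than canonical usage: it assumes a recent ART release together with scikit-learn, and exact module paths may differ between ART versions.

    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    from art.attacks.evasion import FastGradientMethod
    from art.estimators.classification import SklearnClassifier

    # Train an ordinary scikit-learn model on the digits data set.
    x, y = load_digits(return_X_y=True)
    x = x / 16.0  # scale pixel values to [0, 1]
    x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(x_train, y_train)

    # Wrap the model in an ART classifier and craft adversarial test examples.
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
    attack = FastGradientMethod(estimator=classifier, eps=0.1)
    x_adv = attack.generate(x=x_test)

    # Accuracy typically drops sharply on the adversarially perturbed inputs.
    print("Accuracy on clean test data:      ", model.score(x_test, y_test))
    print("Accuracy on adversarial test data:", model.score(x_adv, y_test))

The wrapped classifier exposes a common interface, so the same pattern applies to the other supported model types and to the other attacks and defenses in the toolbox.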

Defending Machine Learning models involves certifying and verifying model robustness, as well as hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial examples, and applying runtime detection methods to flag inputs that might have been modified by an adversary. The attack implementations in ART make it possible to craft adversarial examples against Machine Learning models, which is required to test defenses under state-of-the-art threat models.

Supported attack and defense methods

The Adversarial Robustness Toolbox contains implementations of many state-of-the-art attacks, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), DeepFool, the Jacobian Saliency Map Attack (JSMA), and the Carlini & Wagner attacks.

Defense methods are also supported, including adversarial training, feature squeezing, spatial smoothing, label smoothing, and Gaussian data augmentation.
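
As one example, adversarial training can be applied through ART's trainer API. The following is a minimal sketch, assuming a recent ART release and PyTorch; the small network and all hyperparameters are illustrative only.

    import numpy as np
    import torch
    import torch.nn as nn
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    from art.attacks.evasion import ProjectedGradientDescent
    from art.defences.trainer import AdversarialTrainer
    from art.estimators.classification import PyTorchClassifier
    from art.utils import to_categorical

    # Prepare the digits data set (64 features per sample, 10 classes).
    x, y = load_digits(return_X_y=True)
    x = (x / 16.0).astype(np.float32)
    x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

    # A small fully connected network wrapped as an ART classifier.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    classifier = PyTorchClassifier(
        model=model,
        loss=nn.CrossEntropyLoss(),
        optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
        input_shape=(64,),
        nb_classes=10,
        clip_values=(0.0, 1.0),
    )

    # Harden the model by mixing PGD adversarial examples into each training
    # batch; ratio controls the fraction of adversarial samples per batch.
    pgd = ProjectedGradientDescent(estimator=classifier, eps=0.1, eps_step=0.02, max_iter=10)
    trainer = AdversarialTrainer(classifier, attacks=pgd, ratio=0.5)
    trainer.fit(x_train, to_categorical(y_train, nb_classes=10), nb_epochs=5, batch_size=64)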

The details of the work from IBM Research can be found in the research paper. The ART toolbox is developed with the goal of helping developers better understand the following:

  • Measuring model robustness (see the sketch after this list)
  • Hardening models against adversarial attacks
  • Detecting adversarial inputs at runtime
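
To support the first of these goals, ART also ships a metrics module. The sketch below is a minimal illustration, assuming a recent ART release; it reuses the classifier and x_test objects from the first sketch above, and the chosen step size is illustrative only.

    from art.metrics import empirical_robustness

    # Approximate the average minimal FGSM perturbation needed to change the
    # model's predictions; larger values indicate a more robust model.
    # classifier and x_test come from the first sketch above.
    score = empirical_robustness(
        classifier, x_test, attack_name="fgsm", attack_params={"eps_step": 0.01}
    )
    print("Empirical robustness (FGSM):", score)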

Download & Tutorial

Copyright (C) IBM Corporation 2018