Open Access · Proceedings Article

Improving Transferability of Adversarial Examples With Input Diversity

TLDR
DI²-FGSM, as discussed by the authors, improves the transferability of adversarial examples by creating diverse input patterns: instead of using only the original images to generate adversarial examples, the method applies random transformations to the input images at each iteration.
Abstract
Though CNNs have achieved state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, which are crafted by adding human-imperceptible perturbations to clean images. However, most existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM.
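To make the procedure concrete, below is a minimal PyTorch sketch of the input-diversity idea. It assumes a shape-preserving variant of the transform (random downscaling followed by random zero-padding back to the original resolution); the function names, step size, and transform probability p = 0.5 are illustrative choices rather than the authors' exact code, which is available at the repository above.

```python
import torch
import torch.nn.functional as F

def input_diversity(x, p=0.5, low_ratio=0.9):
    # With probability 1 - p, feed the image through unchanged.
    if torch.rand(1).item() > p:
        return x
    w = x.shape[-1]
    # Randomly downscale, then zero-pad back to the original resolution
    # at a random position (shape-preserving variant of the transform).
    size = torch.randint(int(low_ratio * w), w, (1,)).item()
    resized = F.interpolate(x, size=(size, size), mode="nearest")
    pad = w - size
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(resized, (left, pad - left, top, pad - top), value=0.0)

def di_fgsm(model, x, y, eps=16 / 255, steps=10):
    # Iterative FGSM in which each gradient is taken on a transformed input.
    alpha = eps / steps
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(input_diversity(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back to the eps-ball around x and to the valid pixel range.
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0)
    return x_adv
```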



Citations
Proceedings Article

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

TL;DR: This paper proposes a translation-invariant attack method to generate more transferable adversarial examples against defense models: by optimizing a perturbation over an ensemble of translated images, the generated adversarial example is less sensitive to the white-box model being attacked and has better transferability.
Posted Content

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

TL;DR: A translation-invariant attack method to generate more transferable adversarial examples against defense models, which fools eight state-of-the-art defenses at an 82% success rate on average relying only on transferability, demonstrating the insecurity of current defense techniques.
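As a rough illustration of the kernel-convolution approximation behind this attack (averaging gradients over many translated copies is approximated by smoothing the gradient of the untranslated image with a fixed kernel), here is a minimal PyTorch sketch; the Gaussian kernel and its size and sigma are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(ksize=15, sigma=3.0):
    # Depthwise 2-D Gaussian kernel, one copy per RGB channel.
    ax = torch.arange(ksize, dtype=torch.float32) - (ksize - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k2d = torch.outer(g, g)
    return (k2d / k2d.sum()).view(1, 1, ksize, ksize).repeat(3, 1, 1, 1)

def ti_step(x_adv, grad, alpha, kernel):
    # One update: smooth the gradient depthwise, then take a sign step.
    smoothed = F.conv2d(grad, kernel, padding=kernel.shape[-1] // 2, groups=3)
    return x_adv + alpha * smoothed.sign()
```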
Proceedings Article

Disentangling Adversarial Robustness and Generalization

TL;DR: This work assumes an underlying, low-dimensional data manifold and shows that regular robustness and generalization are not necessarily contradictory goals, implying that models can be both robust and accurate.
Proceedings Article

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks

TL;DR: NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack that generates more transferable adversarial examples against defense models, exhibiting higher transferability and achieving higher attack success rates than state-of-the-art gradient-based attacks.
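A minimal PyTorch sketch of how the two ideas compose, under the usual formulation: a Nesterov look-ahead before each gradient evaluation (NI-FGSM) plus gradient averaging over dyadically scaled copies of the input (SIM). The momentum decay mu, number of scales m, and step sizes are illustrative defaults, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def si_ni_fgsm(model, x, y, eps=16 / 255, steps=10, mu=1.0, m=5):
    alpha = eps / steps
    g = torch.zeros_like(x)                   # accumulated momentum
    x_adv = x.clone().detach()
    for _ in range(steps):
        # Nesterov look-ahead: evaluate the gradient at the anticipated next point.
        x_nes = (x_adv + alpha * mu * g).detach().requires_grad_(True)
        # Scale invariance: average the loss over m scaled copies x / 2^i.
        loss = sum(F.cross_entropy(model(x_nes / 2 ** i), y) for i in range(m))
        grad = torch.autograd.grad(loss, x_nes)[0] / m
        g = mu * g + grad / grad.abs().mean()  # L1-normalized momentum update
        x_adv = x + (x_adv + alpha * g.sign() - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```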
Posted Content

Ensemble Adversarial Training: Attacks and Defenses.

TL;DR: Ensemble adversarial training, as discussed by the authors, augments training data with perturbations transferred from other models, and has been shown to yield models with strong robustness to black-box attacks.
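A minimal sketch of that recipe, assuming a PyTorch training loop in which part of each batch is replaced by FGSM perturbations computed on a separate, pre-trained "static" model rather than on the model being trained; the helper names, the 50/50 split, and eps are illustrative choices.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # Single-step FGSM perturbation of x against the given (frozen) model.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def ensemble_adv_step(model, static_model, optimizer, x, y, eps=8 / 255):
    # Replace the first half of the batch with perturbations transferred
    # from the static model; keep the second half clean.
    half = x.size(0) // 2
    x_adv = fgsm(static_model, x[:half], y[:half], eps)
    batch = torch.cat([x_adv, x[half:]])
    loss = F.cross_entropy(model(batch), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```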
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network that achieved state-of-the-art performance on ImageNet classification; it consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
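For reference, a minimal PyTorch sketch of that layout, assuming the single-GPU variant of the architecture (the original split its feature maps across two GPUs) and a 227×227 input; channel counts follow the paper.

```python
import torch.nn as nn

# Five conv layers (some followed by max-pooling), then three FC layers
# producing logits for the final 1000-way softmax.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),  # logits for the 1000-way softmax
)
```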
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings Article

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Posted Content

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TL;DR: Faster R-CNN, as discussed by the authors, proposes a Region Proposal Network (RPN) to generate high-quality region proposals, which are then used by Fast R-CNN for detection.
Proceedings Article

Rethinking the Inception Architecture for Computer Vision

TL;DR: In this article, the authors explore ways to scale up networks that utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.