Open Access Posted Content

Improving Transferability of Adversarial Examples with Input Diversity

TLDR
The authors propose to improve the transferability of adversarial examples by creating diverse input patterns: instead of only using the original images to generate adversarial examples, they apply random transformations to the input images at each iteration.
Abstract
Though CNNs have achieved state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples, crafted by adding human-imperceptible perturbations to clean images. However, most existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. By evaluating our method against top defense solutions and official baselines from the NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at this https URL.
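The random input transformation the abstract describes (applying a random resize-and-pad to the image at each attack iteration) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function names `input_diversity` and `nn_resize`, the output size of 330, and the transformation probability of 0.5 are assumptions for the sketch, and nearest-neighbour resizing stands in for whatever interpolation the original code uses.

```python
import numpy as np

def nn_resize(img, new_h, new_w):
    # Nearest-neighbour resize of an (H, W, C) array via index mapping.
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols[None, :]]

def input_diversity(img, out_size=330, prob=0.5, rng=None):
    """With probability `prob`, randomly resize `img` and pad it with zeros
    at a random offset inside an out_size x out_size canvas; otherwise just
    resize to the canvas size. Assumes img height/width < out_size."""
    if rng is None:
        rng = np.random.default_rng()
    h = img.shape[0]
    if rng.random() >= prob:
        # No diversity transform this iteration: keep a consistent shape.
        return nn_resize(img, out_size, out_size)
    # Pick a random intermediate size in [h, out_size).
    rnd = int(rng.integers(h, out_size))
    resized = nn_resize(img, rnd, rnd)
    # Place the resized image at a random offset; the rest stays zero.
    top = int(rng.integers(0, out_size - rnd + 1))
    left = int(rng.integers(0, out_size - rnd + 1))
    canvas = np.zeros((out_size, out_size, img.shape[2]), dtype=img.dtype)
    canvas[top:top + rnd, left:left + rnd] = resized
    return canvas
```

In an iterative attack, this transform would be applied to the current adversarial image before each gradient computation, so successive gradients are taken on differently resized and padded inputs rather than the same fixed image.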

Citations
Posted Content

Efficient Adversarial Training with Transferable Adversarial Examples

TL;DR: This paper shows that there is high transferability between models from neighboring epochs of the same training process, i.e., adversarial examples from one epoch remain adversarial in subsequent epochs, and proposes a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA), that enhances the robustness of trained models and greatly improves training efficiency by accumulating adversarial perturbations across epochs.
Proceedings ArticleDOI

Improving Adversarial Transferability via Neuron Attribution-based Attacks

TL;DR: Proposes the Neuron Attribution-based Attack (NAA), which conducts feature-level attacks with more accurate neuron-importance estimations: it first completely attributes a model's output to each neuron in a middle layer, and then tremendously reduces the computation overhead of this attribution.
Proceedings ArticleDOI

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

TL;DR: A novel face-protection method that constructs adversarial face images with both stronger black-box transferability and better visual quality, introducing a new regularization module and a joint training strategy to reconcile the conflict between the adversarial noise and the cycle-consistency loss in makeup transfer.
Proceedings Article

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains

TL;DR: This paper proposes a Beyond ImageNet Attack (BIA) to investigate the transferability towards black-box domains (unknown classification tasks) and uses a generative model to learn the adversarial function for disrupting low-level features of input images.
Proceedings ArticleDOI

Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input

TL;DR: This work proposes the object-based diverse input (ODI) method, which draws an adversarial image on a 3D object and induces the rendered image to be classified as the target class; it also demonstrates the applicability of ODI to adversarial examples for face verification and its superior performance improvement there.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Proceedings ArticleDOI

Rethinking the Inception Architecture for Computer Vision

TL;DR: In this paper, the authors explore ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
Proceedings ArticleDOI

Fast R-CNN

TL;DR: Fast R-CNN is a Fast Region-based Convolutional Network method for object detection that employs several innovations to improve training and testing speed while also increasing detection accuracy, achieving a higher mAP on PASCAL VOC 2012.