Simple Black-Box Adversarial Attacks on Deep Neural Networks
Citations
1,702 citations
Cites background or methods from "Simple Black-Box Adversarial Attack..."
...In particular, to the best of our knowledge, the only work before ours that ever mentioned using one-pixel modification to change class labels was carried out by Narodytska and Kasiviswanathan [15]....
[...]
...Several black-box attacks that require no internal knowledge about the target systems, such as gradients, have also been proposed [5], [15], [17]....
[...]
Cites background from "Simple Black-Box Adversarial Attack..."
...The defense is not robust for black-box attacks [56, 60] where an adversary generates malicious examples on a locally trained substitute model....
[...]
References
55,235 citations
"Simple Black-Box Adversarial Attack..." refers methods in this paper
...We trained Network-in-Network [15] and VGG [25] for MNIST, CIFAR, SVHN, STL10, with minor adjustments for the corresponding image sizes....
[...]
...VGG is another powerful network that proved to be useful in many applications beyond image classification, like object localization [23]....
[...]
...For the ImageNet1000 dataset, we used pretrained VGG models from [5]....
[...]
...All Caffe VGG models were converted to Torch models using the loadcaffe package [30]....
[...]
...In particular in this paper, we consider the CIFAR10, MNIST, SVHN, STL10, and ImageNet1000 datasets, and two popular network architectures, Network-in-Network [15] and VGG [25]....
[...]
30,843 citations
"Simple Black-Box Adversarial Attack..." refers methods in this paper
...In general, we observed that models trained with batch normalization are somewhat more resilient to adversarial perturbations, probably because of the regularization properties of batch normalization [12]....
[...]
...We trained each model in two variants: with and without batch normalization [12]....
[...]