SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition
TL;DR: SmartBox is a Python-based toolbox that provides an open-source implementation of adversarial detection and mitigation algorithms for face recognition, and a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark.
Abstract: Deep learning models are widely used for various purposes such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks. These attacks compute perturbations to generate images that decrease the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms against face recognition. SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms. In this research, the Extended Yale Face Database B has been used for generating adversarial examples using various attack algorithms such as DeepFool, gradient methods, Elastic-Net, and the $L_{2}$ attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available at http://iab-rubric.org/resources/SmartBox.html.
Citations
169 citations
61 citations
Cites methods from "SmartBox: Benchmarking Adversarial ..."
...Recently, Goel et al. (2018) have prepared the SmartBox toolbox containing several existing adversarial generation, detection, and mitigation algorithms....
[...]
39 citations
Cites background from "SmartBox: Benchmarking Adversarial ..."
...[10] have developed a toolbox containing various algorithms for adversarial generation, detection, and mitigation....
[...]
39 citations
Cites background from "SmartBox: Benchmarking Adversarial ..."
...However, the noisy structure of the perturbation makes these attacks vulnerable against conventional defense methods such as quantizing [18], smoothing [6] or training on adversarial examples [30]....
[...]
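The quantizing and smoothing defenses mentioned in the excerpt above can be sketched as simple input preprocessing. The bit depth, kernel size, and NumPy implementation below are illustrative assumptions for a minimal sketch, not SmartBox's actual code:

```python
import numpy as np

def quantize(x, bits=4):
    """Bit-depth reduction: snap pixel values in [0, 1] onto 2**bits
    evenly spaced levels, discarding low-amplitude adversarial noise."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def median_smooth(x, k=3):
    """Naive k x k median filter over a 2D image (edge-padded),
    a local smoothing defense against pixel-level perturbations."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.median(xp[i:i + k, j:j + k])
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))                                # toy 8x8 "face" image
noisy = np.clip(img + rng.uniform(-0.02, 0.02, img.shape), 0, 1)
smoothed = median_smooth(quantize(noisy))               # defended input
```

Both operations are attack-agnostic: they run once on the input before it reaches the recognition model.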
28 citations
Cites background or methods from "SmartBox: Benchmarking Adversarial ..."
...t the attacks performed using image-agnostic perturbations (i.e., one noise across multiple images) can be detected using a computationally efficient algorithm based on the data distribution. Further, Goel et al. (2018) developed the first benchmark toolbox of algorithms for adversarial generation, detection, and mitigation for face recognition. Recently, Goel et al. (2019) presented one of the best security mechanis...
[...]
References
9,436 citations
8,865 citations
"SmartBox: Benchmarking Adversarial ..." refers background in this paper
...Deep learning models have achieved state-of-the-art performance in various computer vision related tasks such as object detection and face recognition [18, 24]....
[...]
7,946 citations
"SmartBox: Benchmarking Adversarial ..." refers background or methods in this paper
...FGSM [15]: It computes the gradient of the loss function of the model with respect to the image vector to get the direction of pixel change....
[...]
...[15] Computes gradient of the loss function w....
[...]
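The FGSM step described in the excerpts above can be sketched in a few lines. The toy linear classifier, shapes, and epsilon below are illustrative assumptions, not SmartBox's implementation:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Fast Gradient Sign Method: take a step of size eps in the sign
    of the loss gradient w.r.t. the input (an L-infinity constraint)."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep a valid image range

def loss_grad_wrt_input(x, W, y):
    """Gradient of softmax cross-entropy loss of a linear classifier
    w.r.t. the input image x (computed analytically)."""
    logits = W @ x
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return W.T @ (p - onehot)  # dL/dx for softmax cross-entropy

rng = np.random.default_rng(0)
x = rng.random(64)                      # flattened 8x8 "face" image in [0, 1]
W = rng.standard_normal((10, 64))       # toy 10-class linear model
g = loss_grad_wrt_input(x, W, y=3)
x_adv = fgsm_perturb(x, g, eps=0.03)
print(np.abs(x_adv - x).max() <= 0.03 + 1e-9)  # → True (perturbation bounded)
```

The sign operation makes every pixel move by exactly eps (before clipping), which is why FGSM is the canonical L∞-bounded attack.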
...While whitebox attacks such as ElasticNet (EAD) [6], DeepFool [28], L2 [5], Fast Gradient Sign Method (FGSM) [15], Projected Gradient Descent (PGD) [26], and MI-FGSM [10] have complete access and information about the trained network, blackbox attacks such as the one pixel attack [32] and universal perturbations [27] have no information about the trained Deep Neural Network (DNN)....
[...]
...FGSM perturbations can be computed by minimizing either the L1, L2, or L∞ norm....
[...]
6,703 citations
"SmartBox: Benchmarking Adversarial ..." refers background or methods in this paper
...Adversarial Training: In adversarial training [33], a new model is trained using the original dataset and adversarial examples with their correct labels....
[...]
...[33] Trains a new model on original and adversarial training images....
[...]
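The adversarial training recipe in the excerpts above (retrain on original plus adversarial examples with their correct labels) can be sketched on a toy problem. The two-blob data, logistic regression model, and epsilon are illustrative assumptions standing in for a face recognition model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary task: two Gaussian blobs standing in for two identities.
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def train(X, y, steps=500, lr=0.1):
    """Logistic regression via plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = train(X, y)  # model trained on clean data only

# FGSM adversarial copies of the training set (eps chosen arbitrarily).
eps = 0.2
p = 1 / (1 + np.exp(-X @ w))
grad_x = (p - y)[:, None] * w          # dL/dx for each sample
X_adv = X + eps * np.sign(grad_x)

# Adversarial training: retrain on the union of clean and adversarial
# data, reusing the correct labels for the perturbed copies.
w_robust = train(np.vstack([X, X_adv]), np.concatenate([y, y]))
```

The retrained model sees each attack pattern at training time, so the decision boundary is pushed away from the perturbed examples.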
4,705 citations
"SmartBox: Benchmarking Adversarial ..." refers background or methods in this paper
...On the Extended Yale B database [13][21], attack generation results are summarized in Table 2....
[...]
...Experiments were conducted on the Extended Yale Face Database B [13][21]....
[...]