Proceedings ArticleDOI

SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition

01 Oct 2018 - pp. 1-7
TL;DR: SmartBox is a Python-based toolbox that provides an open-source implementation of adversarial detection and mitigation algorithms for face recognition, and it offers a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark.
Abstract: Deep learning models are widely used for various purposes such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks. These attacks compute perturbations to generate images that decrease the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms against face recognition. SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms. In this research, the Extended Yale Face Database B has been used for generating adversarial examples using various attack algorithms such as DeepFool, Gradient methods, Elastic-Net, and the $L_{2}$ attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available at http://iab-rubric.org/resources/SmartBox.html.
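The attack-and-evaluation loop that SmartBox benchmarks can be illustrated with a minimal sketch. This is not the SmartBox API (which is documented at the link above); it uses a one-step fast gradient sign method (FGSM), standing in for the "Gradient methods" family, against a generic face classifier in PyTorch, and the names face_model, x, and y are assumptions for illustration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    # One-step gradient-sign attack (FGSM), a simple member of the
    # gradient-based attack family benchmarked by the toolbox.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def accuracy(model, images, labels):
    with torch.no_grad():
        return (model(images).argmax(dim=1) == labels).float().mean().item()

# Hypothetical usage: face_model is any face-identification classifier,
# (x, y) a batch of normalized face crops with identity labels.
# clean_acc = accuracy(face_model, x, y)
# adv_acc   = accuracy(face_model, fgsm_attack(face_model, x, y), y)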
Citations
Journal ArticleDOI
TL;DR: A comprehensive review of the recent developments on deep face recognition can be found in this paper, covering broad topics on algorithm designs, databases, protocols, and application scenes, as well as the technical challenges and several promising directions.

353 citations

Journal ArticleDOI
TL;DR: This paper attempts to unravel three aspects related to the robustness of DNNs for face recognition in terms of vulnerabilities to attacks, detecting the singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and making corrections to the processing pipeline to alleviate the problem.
Abstract: Deep neural network (DNN) architecture based models have high expressive power and learning capacity. However, they are essentially a black box method since it is not easy to mathematically formulate the functions that are learned within its many layers of representation. Realizing this, many researchers have started to design methods to exploit the drawbacks of deep learning based algorithms questioning their robustness and exposing their singularities. In this paper, we attempt to unravel three aspects related to the robustness of DNNs for face recognition: (i) assessing the impact of deep architectures for face recognition in terms of vulnerabilities to attacks, (ii) detecting the singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and (iii) making corrections to the processing pipeline to alleviate the problem. Our experimental evaluation using multiple open-source DNN-based face recognition networks, and three publicly available face databases demonstrates that the performance of deep learning based face recognition algorithms can suffer greatly in the presence of such distortions. We also evaluate the proposed approaches on four existing quasi-imperceptible distortions: DeepFool, Universal adversarial perturbations, $$l_2$$ , and Elastic-Net (EAD). The proposed method is able to detect both types of attacks with very high accuracy by suitably designing a classifier using the response of the hidden layers in the network. Finally, we present effective countermeasures to mitigate the impact of adversarial attacks and improve the overall robustness of DNN-based face recognition.
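As a rough illustration of the detection idea (a classifier over hidden-layer responses), the sketch below collects the mean activation of a few chosen hidden layers for clean and adversarial batches and fits an SVM on them. It is a simplification of the paper's detector, and the names net, layers, clean_x, and adv_x are assumptions.

import numpy as np
import torch
from sklearn.svm import SVC

def hidden_responses(model, layers, images):
    # Mean activation of each chosen hidden layer: one feature per layer.
    feats = []
    hooks = [layer.register_forward_hook(
                 lambda mod, inp, out, acc=feats: acc.append(out.detach().flatten(1).mean(1)))
             for layer in layers]
    with torch.no_grad():
        model(images)
    for h in hooks:
        h.remove()
    return torch.stack(feats, dim=1).cpu().numpy()   # shape: (batch, n_layers)

# Hypothetical usage: net is a face recognition CNN, layers a list of its
# nn.Module blocks to probe, clean_x / adv_x batches of clean and attacked faces.
# X = np.vstack([hidden_responses(net, layers, clean_x),
#                hidden_responses(net, layers, adv_x)])
# y = np.array([0] * len(clean_x) + [1] * len(adv_x))
# detector = SVC(kernel="rbf").fit(X, y)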

98 citations


Cites methods from "SmartBox: Benchmarking Adversarial ..."

  • ...Recently, Goel et al. (2018) have prepared the SmartBox toolbox containing several existing adversarial generation, detection, and mitigation algorithms....


Proceedings ArticleDOI
01 Jan 2019
TL;DR: A fast landmark manipulation method for generating adversarial faces is proposed, which is approximately 200 times faster than previous geometric attacks and obtains a 99.86% success rate on state-of-the-art face recognition models.
Abstract: The state-of-the-art performance of deep learning algorithms has led to a considerable increase in the utilization of machine learning in security-sensitive and critical applications. However, it has recently been shown that a small and carefully crafted perturbation in the input space can completely fool a deep model. In this study, we explore the extent to which face recognition systems are vulnerable to geometrically-perturbed adversarial faces. We propose a fast landmark manipulation method for generating adversarial faces, which is approximately 200 times faster than previous geometric attacks and obtains a 99.86% success rate on state-of-the-art face recognition models. To further force the generated samples to be natural, we introduce a second attack constrained on the semantic structure of the face, which runs at half the speed of the first attack and achieves a success rate of 99.96%. Both attacks are extremely robust against state-of-the-art defense methods, with success rates equal to or greater than 53.59%. Code is available at https://github.com/alldbi/FLM
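The geometric idea, moving facial landmarks and warping the image accordingly, can be sketched as follows. This is a simplified illustration rather than the authors' fast landmark manipulation (FLM) algorithm, which selects the displacement with a gradient-based search; landmarks and displacement are assumed inputs in (col, row) order.

import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def warp_by_landmarks(image, landmarks, displacement):
    # Move each landmark by the given offset and warp the image so that the
    # face geometry follows; the image corners are pinned so that only the
    # face region deforms.
    h, w = image.shape[:2]
    corners = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], float)
    src = np.vstack([np.asarray(landmarks, float), corners])
    dst = np.vstack([np.asarray(landmarks, float) + displacement, corners])
    tform = PiecewiseAffineTransform()
    tform.estimate(dst, src)      # warp() expects the output -> input mapping
    return warp(image, tform)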

63 citations


Cites background from "SmartBox: Benchmarking Adversarial ..."

  • ...However, the noisy structure of the perturbation makes these attacks vulnerable against conventional defense methods such as quantizing [18], smoothing [6] or training on adversarial examples [30]....


Proceedings ArticleDOI
01 Oct 2018
TL;DR: A simple but efficient approach, based on pixel values and Principal Component Analysis as features coupled with a Support Vector Machine as the classifier, is presented to detect image-agnostic universal perturbations.
Abstract: The high performance of deep neural network based systems has attracted many applications in object recognition and face recognition. However, researchers have also demonstrated that they are highly sensitive to adversarial perturbations and hence tend to be unreliable and lack robustness. While most of the research on adversarial perturbation focuses on image-specific attacks, image-agnostic universal perturbations have recently been proposed, which learn the adversarial pattern over the training distribution and have a broader impact on real-world security applications. Such adversarial attacks can have a compounding effect on face recognition, where these visually imperceptible attacks can cause mismatches. To defend against adversarial attacks, sophisticated detection approaches are prevalent, but most of the existing approaches do not focus on image-agnostic attacks. In this paper, we present a simple but efficient approach based on pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations. We also present evaluation metrics, namely adversarial perturbation class classification error rate, original class classification error rate, and average classification error rate, to estimate the performance of adversarial perturbation detection algorithms. The experimental results on multiple databases and different DNN architectures show that it is indeed not required to build complex detection algorithms; rather, simpler approaches can yield higher detection rates and lower error rates for image-agnostic adversarial perturbations.
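The detection pipeline described here maps directly onto standard scikit-learn components. A minimal sketch follows, assuming clean and perturbed are arrays of aligned face images; the number of PCA components and the SVM kernel are illustrative choices, not the paper's exact settings.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_detector(clean, perturbed, n_components=50):
    # Flatten raw pixel values, project with PCA, and train an SVM to
    # separate original images (label 0) from perturbed ones (label 1).
    X = np.vstack([clean.reshape(len(clean), -1),
                   perturbed.reshape(len(perturbed), -1)]).astype(float)
    y = np.array([0] * len(clean) + [1] * len(perturbed))
    detector = make_pipeline(StandardScaler(),
                             PCA(n_components=n_components),
                             SVC(kernel="rbf"))
    return detector.fit(X, y)

# Usage sketch: build_detector(clean, perturbed).predict(x.reshape(len(x), -1))
# flags which test faces carry the universal perturbation.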

54 citations


Cites background from "SmartBox: Benchmarking Adversarial ..."

  • ...[10] have developed a toolbox containing various algorithms corresponding to adversarial generation, detection, and mitigation....


Journal ArticleDOI
03 Apr 2020
TL;DR: Different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working, are summarized.
Abstract: Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications. Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged. This paper summarizes the different ways in which the robustness of a face recognition algorithm is challenged, which can severely affect its intended working. Different types of attacks, such as physical presentation attacks, disguise/makeup, digital adversarial attacks, and morphing/tampering using GANs, have been discussed. We also present a discussion on the effect of bias on face recognition models and showcase that factors such as age and gender variations affect the performance of modern algorithms. The paper also presents the potential reasons for these challenges and some of the future research directions for increasing the robustness of face recognition models.

53 citations


Cites background or methods from "SmartBox: Benchmarking Adversarial ..."

  • ...Further, Goel et al. (2018) developed the first benchmark toolbox of algorithms for adversarial generation, detection, and mitigation for face recognition....


  • ...the attacks performed using image-agnostic perturbations (i.e., one noise across multiple images) can be detected using a computationally efficient algorithm based on the data distribution. Further, Goel et al. (2018) developed the first benchmark toolbox of algorithms for adversarial generation, detection, and mitigation for face recognition. Recently, Goel et al. (2019) presented one of the best security mechanis...


References
Journal ArticleDOI
TL;DR: Denoising adversarial autoencoders (AAEs), as proposed in this paper, combine denoising and regularization, shaping the distribution of the latent space using adversarial training; autoencoders trained with a denoising criterion can synthesize samples that are more consistent with the input data than those trained without a corruption process.
Abstract: Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabeled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabeled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularization during training to shape the distribution of the encoded data in the latent space. We suggest denoising adversarial autoencoders (AAEs), which combine denoising and regularization, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of AAEs. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.
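SmartBox's mitigation section cites this work for its denoising autoencoder. A minimal convolutional denoising autoencoder in PyTorch is sketched below, trained to map perturbed faces back toward their clean versions; it omits the adversarial regularization of the latent space that distinguishes the AAE, and the layer sizes assume single-channel inputs with even spatial dimensions.

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    # Maps an adversarially perturbed face back toward its clean version.
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

# Training sketch: minimize the reconstruction error against the clean image.
# ae = DenoisingAE()
# opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
# loss = nn.functional.mse_loss(ae(x_perturbed), x_clean)
# loss.backward(); opt.step()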

101 citations

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a straightforward method for detecting adversarial image examples, which can be directly deployed into unmodified off-the-shelf DNN models.
Abstract: Recently, many studies have demonstrated that deep neural network (DNN) classifiers can be fooled by adversarial examples, which are crafted by introducing small perturbations into an original sample. Accordingly, some powerful defense techniques have been proposed. However, existing defense techniques often require modifying the target model or depend on prior knowledge of attacks. In this paper, we propose a straightforward method for detecting adversarial image examples, which can be directly deployed into unmodified off-the-shelf DNN models. We consider the perturbation to images as a kind of noise and introduce two classic image processing techniques, scalar quantization and a smoothing spatial filter, to reduce its effect. The image entropy is employed as a metric to implement adaptive noise reduction for different kinds of images. Consequently, an adversarial example can be effectively detected by comparing the classification results of a given sample and its denoised version, without referring to any prior knowledge of attacks. More than 20,000 adversarial examples, crafted with different attack techniques against state-of-the-art DNN models, are used to evaluate the proposed method. The experiments show that our detection method can achieve a high overall F1 score of 96.39% and certainly raises the bar for defense-aware attacks.
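A stripped-down version of this detection recipe (quantize, smooth, and flag the input if the prediction flips) can be written as follows. It drops the entropy-based adaptation of the noise-reduction strength described in the abstract, and predict is an assumed classifier callable.

import numpy as np
from scipy.ndimage import median_filter

def denoise(image, levels=8, filter_size=3):
    # Scalar quantization followed by a smoothing spatial (median) filter;
    # pixel values are assumed to lie in [0, 1].
    img = np.asarray(image, dtype=float)
    quantized = np.round(img * (levels - 1)) / (levels - 1)
    return median_filter(quantized, size=filter_size)

def looks_adversarial(predict, image):
    # Flag the input if the predicted label changes after denoising;
    # predict is any callable mapping an image to a class label.
    return predict(image) != predict(denoise(image))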

62 citations

Proceedings Article
Ji Gao, Beilun Wang, Zeming Lin, Weilin Xu, Yanjun Qi
17 Feb 2017
TL;DR: DeepCloak as mentioned in this paper identifies and removes unnecessary features in a DNN model to limit the capacity an attacker can use to generate adversarial samples and therefore increase the robustness against such inputs.
Abstract: Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It was observed that an adversary could easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepCloak. By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases the robustness against such inputs. Compared with other defensive approaches, DeepCloak is easy to implement and computationally efficient. Experimental results show that DeepCloak can increase the performance of state-of-the-art DNN models against adversarial samples.
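The DeepCloak idea, masking the feature dimensions that react most strongly to adversarial perturbation, can be approximated with the short sketch below; model_up_to_layer is an assumed callable returning the activations of the layer to be cloaked for paired clean/adversarial batches, and the removal fraction is illustrative.

import torch

def feature_mask(model_up_to_layer, clean, adversarial, remove_frac=0.05):
    # Score each feature of an intermediate layer by how much it changes
    # between clean images and their adversarial counterparts, then build a
    # 0/1 mask that removes the most attack-sensitive fraction of features.
    with torch.no_grad():
        diff = (model_up_to_layer(adversarial) - model_up_to_layer(clean)).abs()
        sensitivity = diff.flatten(1).mean(0)            # one score per feature
    k = int(remove_frac * sensitivity.numel())
    mask = torch.ones_like(sensitivity)
    mask[sensitivity.topk(k).indices] = 0.0
    return mask

# Hypothetical usage: multiply the mask elementwise into the flattened layer
# output before passing it on to the remaining layers of the network.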

61 citations

Proceedings ArticleDOI
01 Oct 2018
TL;DR: A simple but efficient approach, based on pixel values and Principal Component Analysis as features coupled with a Support Vector Machine as the classifier, is presented to detect image-agnostic universal perturbations.
Abstract: The high performance of deep neural network based systems has attracted many applications in object recognition and face recognition. However, researchers have also demonstrated that they are highly sensitive to adversarial perturbations and hence tend to be unreliable and lack robustness. While most of the research on adversarial perturbation focuses on image-specific attacks, image-agnostic universal perturbations have recently been proposed, which learn the adversarial pattern over the training distribution and have a broader impact on real-world security applications. Such adversarial attacks can have a compounding effect on face recognition, where these visually imperceptible attacks can cause mismatches. To defend against adversarial attacks, sophisticated detection approaches are prevalent, but most of the existing approaches do not focus on image-agnostic attacks. In this paper, we present a simple but efficient approach based on pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations. We also present evaluation metrics, namely adversarial perturbation class classification error rate, original class classification error rate, and average classification error rate, to estimate the performance of adversarial perturbation detection algorithms. The experimental results on multiple databases and different DNN architectures show that it is indeed not required to build complex detection algorithms; rather, simpler approaches can yield higher detection rates and lower error rates for image-agnostic adversarial perturbations.

54 citations

Posted Content
TL;DR: This work suggests that autoencoders trained using a denoising criterion achieve higher classification performance and can synthesize samples that are more consistent with the input data than those trained without a corruption process.
Abstract: Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabelled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabelled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clean input samples from corrupted ones. Representations may be further improved by introducing regularisation during training to shape the distribution of the encoded data in latent space. We suggest denoising adversarial autoencoders, which combine denoising and regularisation, shaping the distribution of latent space using adversarial training. We introduce a novel analysis that shows how denoising may be incorporated into the training and sampling of adversarial autoencoders. Experiments are performed to assess the contributions that denoising makes to the learning of representations for classification and sample synthesis. Our results suggest that autoencoders trained using a denoising criterion achieve higher classification performance, and can synthesise samples that are more consistent with the input data than those trained without a corruption process.

48 citations


"SmartBox: Benchmarking Adversarial ..." refers background or methods in this paper

  • ...Denoising AutoEncoder (Creswell and Bharath [8]): reconstructs original images from perturbed images....


  • ...Denoising Autoencoder: This method is similar to the one proposed by Creswell and Bharath [8] which learns the weights of a denoising autoencoder through adversarial training examples....
