Proceedings Article

SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition

TL;DR
SmartBox is a Python-based toolbox that provides an open-source implementation of adversarial detection and mitigation algorithms for face recognition, along with a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark.
Abstract
Deep learning models are widely used for tasks such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks: carefully computed perturbations that generate images which degrade the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms against face recognition. SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms. In this research, the Extended Yale Face Database B has been used for generating adversarial examples using various attack algorithms such as DeepFool, gradient-based methods, Elastic-Net, and the $L_{2}$ attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available at http://iab-rubric.org/resources/SmartBox.html.
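SmartBox's own API is not reproduced on this page, but the style of benchmarking loop it automates can be illustrated with a minimal PyTorch sketch: a one-step gradient-sign attack (one of the gradient-based methods above) against a hypothetical face classifier, followed by a clean-versus-adversarial accuracy comparison. The `model`, `loader`, and `epsilon` below are illustrative assumptions, not SmartBox's interface.

```python
# Hedged sketch: FGSM-style adversarial example generation and evaluation
# for a face classifier, illustrating the kind of attack SmartBox benchmarks.
# `model` and `loader` are hypothetical stand-ins (e.g., a network trained on
# the Extended Yale Face Database B); this is not SmartBox's actual API.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step gradient-sign attack: perturb each pixel by epsilon in the
    direction that increases the classification loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

def accuracy_under_attack(model, loader, epsilon=0.03):
    """Compare clean vs. adversarial accuracy, as a benchmark harness would."""
    model.eval()
    clean_correct = adv_correct = total = 0
    for images, labels in loader:
        adv = fgsm_attack(model, images, labels, epsilon)
        with torch.no_grad():
            clean_correct += (model(images).argmax(1) == labels).sum().item()
            adv_correct += (model(adv).argmax(1) == labels).sum().item()
        total += labels.size(0)
    return clean_correct / total, adv_correct / total
```

The same harness extends to iterative attacks such as DeepFool, Elastic-Net, or the $L_{2}$ attack by swapping out `fgsm_attack`.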


Citations
Journal Article

Deep face recognition: A survey

TL;DR: A comprehensive review of the recent developments on deep face recognition can be found in this paper, covering broad topics on algorithm designs, databases, protocols, and application scenes, as well as the technical challenges and several promising directions.
Journal Article

Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition

TL;DR: This paper attempts to unravel three aspects of the robustness of DNNs for face recognition: vulnerability to attacks; detection of singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and corrections to the processing pipeline to alleviate the problem.
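The detection idea summarized above (flagging inputs whose hidden-layer filter responses deviate from statistics gathered on clean images) can be sketched roughly as follows; the monitored layer and the per-filter z-score are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of detection via abnormal hidden-layer responses: compare an
# input's intermediate activations against mean/std statistics estimated on
# clean images. Layer choice and scoring are assumptions, not the paper's
# exact method.
import torch

class ActivationDetector:
    def __init__(self, model, layer):
        self.model = model
        self.acts = None
        layer.register_forward_hook(self._hook)  # capture the layer's output

    def _hook(self, module, inputs, output):
        # Spatially average each conv filter's response map: (batch, channels).
        self.acts = output.detach().flatten(2).mean(-1)

    def fit(self, clean_loader):
        """Estimate per-filter statistics on clean images."""
        feats = []
        with torch.no_grad():
            for images, _ in clean_loader:
                self.model(images)
                feats.append(self.acts)
        feats = torch.cat(feats)
        self.mu, self.sigma = feats.mean(0), feats.std(0) + 1e-8

    def score(self, images):
        """Mean per-filter z-score; higher values suggest adversarial input."""
        with torch.no_grad():
            self.model(images)
        return ((self.acts - self.mu).abs() / self.sigma).mean(1)
```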
Proceedings Article

Fast Geometrically-Perturbed Adversarial Faces

TL;DR: A fast landmark manipulation method for generating adversarial faces is proposed; it is approximately 200 times faster than previous geometric attacks and achieves a 99.86% success rate against state-of-the-art face recognition models.
Proceedings Article

Are Image-Agnostic Universal Adversarial Perturbations for Face Recognition Difficult to Detect?

TL;DR: A simple but efficient approach that uses pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations.
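A minimal sketch of such a detector with scikit-learn, assuming labeled sets of clean and perturbed images; the component count and SVM kernel are assumptions, not the paper's reported settings.

```python
# Hedged sketch of the PCA + SVM detector described above: raw pixel values
# are projected with Principal Component Analysis and a Support Vector
# Machine separates clean from perturbed images. Hyperparameters are
# illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_perturbation_detector(clean_imgs, perturbed_imgs, n_components=50):
    """clean_imgs, perturbed_imgs: arrays of shape (n, H, W[, C])."""
    n_clean, n_pert = len(clean_imgs), len(perturbed_imgs)
    X = np.concatenate([clean_imgs, perturbed_imgs]).reshape(n_clean + n_pert, -1)
    y = np.concatenate([np.zeros(n_clean), np.ones(n_pert)])  # 1 = perturbed
    detector = make_pipeline(StandardScaler(),
                             PCA(n_components=n_components),
                             SVC(kernel='rbf'))
    detector.fit(X, y)
    return detector  # detector.predict(img.reshape(1, -1)) flags perturbations
```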
Journal Article

On the Robustness of Face Recognition Algorithms Against Attacks and Bias

TL;DR: Summarizes the different ways in which the robustness of a face recognition algorithm can be challenged, which can severely affect its intended working.
References
Posted Content

Learning Adversary-Resistant Deep Neural Networks.

TL;DR: A generic approach to escalate a DNN's resistance to adversarial samples is proposed, making it robust even if the underlying learning algorithm is revealed; it typically provides superior classification performance and resistance compared with state-of-the-art solutions.
Posted Content

Visible Progress on Adversarial Images and a New Saliency Map.

TL;DR: It is demonstrated that adversarial perturbations which modify YUV images are more conspicuous and less pathological than in RGB space, and a new saliency map is introduced to better understand misclassification.
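The channel-wise comparison behind this claim can be approximated in NumPy by mapping an RGB perturbation into YUV with the standard BT.601 conversion matrix; the assumption here is float RGB input scaled to [0, 1].

```python
# Hedged sketch: measure where a perturbation's energy lands in YUV space.
# Uses the standard BT.601 RGB -> YUV matrix; inputs are assumed to be
# float RGB arrays of shape (H, W, 3) scaled to [0, 1].
import numpy as np

BT601 = np.array([[ 0.299,    0.587,    0.114   ],   # Y (luma)
                  [-0.14713, -0.28886,  0.436   ],   # U (blue projection)
                  [ 0.615,   -0.51499, -0.10001 ]])  # V (red projection)

def perturbation_energy_yuv(clean_rgb, adv_rgb):
    """Per-channel L2 norm of the perturbation in YUV coordinates."""
    delta = (adv_rgb - clean_rgb) @ BT601.T  # rotate RGB deltas into YUV
    return np.linalg.norm(delta.reshape(-1, 3), axis=0)  # [Y, U, V]
```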