Proceedings ArticleDOI

SmartBox: Benchmarking Adversarial Detection and Mitigation Algorithms for Face Recognition

TLDR
SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms against face recognition, and a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark.
Abstract
Deep learning models are widely used for various purposes such as face recognition and speech recognition. However, researchers have shown that these models are vulnerable to adversarial attacks. These attacks compute perturbations to generate images that decrease the performance of deep learning models. In this research, we have developed a toolbox, termed SmartBox, for benchmarking the performance of adversarial attack detection and mitigation algorithms against face recognition. SmartBox is a Python-based toolbox which provides an open-source implementation of adversarial detection and mitigation algorithms. In this research, the Extended Yale Face Database B has been used for generating adversarial examples using various attack algorithms such as DeepFool, Gradient methods, Elastic-Net, and the $L_{2}$ attack. SmartBox provides a platform to evaluate newer attacks, detection models, and mitigation approaches on a common face recognition benchmark. To assist the research community, the code of SmartBox is made available at http://iab-rubric.org/resources/SmartBox.html.
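Among the attack families named above, the gradient methods are the simplest to illustrate. The following is a minimal sketch, not SmartBox's actual API, of a fast gradient sign (FGSM) style perturbation in PyTorch; the model, label format, epsilon value, and pixel range are assumptions made for illustration only.

```python
# Hypothetical sketch (not SmartBox's API): FGSM-style adversarial example generation.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to a valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Detection and mitigation algorithms in a benchmark of this kind would then be evaluated on face images perturbed in this way.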


Citations
Journal ArticleDOI

Deep face recognition: A survey

TL;DR: A comprehensive review of recent developments in deep face recognition, covering broad topics on algorithm designs, databases, protocols, and application scenes, as well as the technical challenges and several promising directions.
Journal ArticleDOI

Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition

TL;DR: This paper attempts to unravel three aspects of the robustness of DNNs for face recognition: their vulnerabilities to attacks; detecting singularities by characterizing abnormal filter response behavior in the hidden layers of deep networks; and making corrections to the processing pipeline to alleviate the problem.
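As a rough illustration of the detection idea summarized above, one could compare an input's hidden-layer responses against statistics gathered from known-clean images; the layer choice, the per-unit z-score statistic, and the threshold below are assumptions, not the paper's exact procedure.

```python
# Illustrative sketch only: flagging anomalous hidden-layer responses.
import numpy as np

def fit_reference_stats(clean_activations):
    """Per-unit mean/std of hidden-layer activations computed on clean images."""
    acts = np.stack(clean_activations)            # shape: (n_images, n_units)
    return acts.mean(axis=0), acts.std(axis=0) + 1e-8

def is_suspicious(activation, mean, std, z_threshold=4.0):
    """Flag an input whose hidden-layer response deviates strongly from clean statistics."""
    z_scores = np.abs((activation - mean) / std)
    return z_scores.max() > z_threshold
```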
Proceedings ArticleDOI

Fast Geometrically-Perturbed Adversarial Faces

TL;DR: A fast landmark manipulation method for generating adversarial faces is proposed, which is approximately 200 times faster than previous geometric attacks and obtains a 99.86% success rate against state-of-the-art face recognition models.
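A minimal sketch of the geometric idea, under the assumption that facial landmarks are detected elsewhere (e.g., with a standard 68-point detector) and that the resulting displacements are applied to the image with a piecewise-affine warp; the displacement budget is illustrative, not the paper's optimized manipulation.

```python
# Conceptual sketch: displacing facial landmarks within a small pixel budget.
# The landmark detector and the image warp that realizes the displacements are
# assumed to come from elsewhere; names and values here are hypothetical.
import numpy as np

def perturb_landmarks(landmarks, max_shift=2.0, rng=None):
    """Shift each (x, y) landmark by at most `max_shift` pixels in each direction."""
    rng = np.random.default_rng() if rng is None else rng
    offsets = rng.uniform(-max_shift, max_shift, size=landmarks.shape)
    return landmarks + offsets
```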
Proceedings ArticleDOI

Are Image-Agnostic Universal Adversarial Perturbations for Face Recognition Difficult to Detect?

TL;DR: A simple but efficient approach that uses raw pixel values with Principal Component Analysis as features, coupled with a Support Vector Machine as the classifier, to detect image-agnostic universal perturbations.
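The recipe is simple enough to sketch directly with scikit-learn; the number of principal components and the SVM settings below are assumptions for illustration.

```python
# Minimal sketch: pixel values -> PCA features -> SVM detector (clean vs. adversarial).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_perturbation_detector(clean_images, adversarial_images, n_components=50):
    """Fit a PCA+SVM detector on flattened clean vs. adversarial face images (NumPy arrays)."""
    X = np.vstack([clean_images.reshape(len(clean_images), -1),
                   adversarial_images.reshape(len(adversarial_images), -1)])
    y = np.concatenate([np.zeros(len(clean_images)), np.ones(len(adversarial_images))])
    detector = make_pipeline(PCA(n_components=n_components), SVC(kernel='rbf'))
    return detector.fit(X, y)
```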
Journal ArticleDOI

On the Robustness of Face Recognition Algorithms Against Attacks and Bias

TL;DR: This paper summarizes the different ways in which the robustness of a face recognition algorithm can be challenged, which can severely affect its intended working.
References
Journal ArticleDOI

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

TL;DR: A comprehensive survey of adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Proceedings ArticleDOI

Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition

TL;DR: A novel class of attacks is defined: attacks that are physically realizable and inconspicuous, allowing an attacker to evade recognition or impersonate another individual. A systematic method to automatically generate such attacks, realized by printing a pair of eyeglass frames, is developed.
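Conceptually, the digital core of such an attack constrains the perturbation to an eyeglass-frame-shaped mask. The sketch below shows only that constraint (the physical printing and robustness terms are omitted); the mask, model, target label, and step size are assumptions.

```python
# Conceptual sketch: one gradient step restricted to an eyeglass-frame mask.
import torch
import torch.nn.functional as F

def masked_attack_step(model, image, target, mask, step=0.01):
    """Modify only the pixels where `mask` == 1, moving toward the target identity."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target)   # impersonation: minimize loss on target id
    loss.backward()
    perturbed = image - step * image.grad.sign() * mask
    return perturbed.clamp(0, 1).detach()
```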
Posted Content

Universal adversarial perturbations

TL;DR: This paper shows the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability.
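A rough sketch of the idea: accumulate per-image minimal perturbations and repeatedly project the running sum back into a small l-infinity ball, so that a single vector fools many images. The helpers `predict` and `minimal_perturbation` are hypothetical stand-ins for a classifier and a per-image attack such as DeepFool; the budget `xi` is an assumption.

```python
# Simplified sketch of building a universal (image-agnostic) perturbation.
import numpy as np

def universal_perturbation(images, predict, minimal_perturbation, xi=10 / 255, epochs=3):
    v = np.zeros_like(images[0])
    for _ in range(epochs):
        for x in images:
            if predict(x + v) == predict(x):         # v does not yet fool this image
                delta = minimal_perturbation(x + v)   # smallest extra push that fools it
                v = np.clip(v + delta, -xi, xi)       # project back onto the l_inf ball
    return v
```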
Posted Content

Detecting Adversarial Samples from Artifacts.

TL;DR: This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results demonstrate a method for implicit adversarial detection that is oblivious to the attack algorithm.
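One of the two cues described above, Monte Carlo dropout uncertainty, can be sketched in a few lines; the pass count and the use of predictive variance as the score are assumptions, and the feature-density estimate is omitted.

```python
# Sketch: MC-dropout predictive variance as an adversarial-input signal.
# `model` must contain dropout layers; `image` is a batched input tensor.
import torch

def mc_dropout_uncertainty(model, image, n_passes=30):
    """Predictive variance over stochastic forward passes with dropout kept active."""
    model.train()                      # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(image), dim=-1) for _ in range(n_passes)])
    return probs.var(dim=0).sum().item()   # higher variance -> more suspicious input
```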
Posted Content

Boosting Adversarial Attacks with Momentum

TL;DR: This article proposes a broad class of momentum-based iterative algorithms to boost adversarial attacks, which stabilize update directions and escape poor local maxima during the iterations, resulting in more transferable adversarial examples.
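A minimal sketch of a momentum iterative attack in this spirit (MI-FGSM style); the step size, decay factor, iteration count, and gradient normalization are assumptions for illustration.

```python
# Sketch: momentum-based iterative attack (MI-FGSM style) in PyTorch.
import torch
import torch.nn.functional as F

def momentum_iterative_attack(model, image, label, epsilon=0.03, steps=10, mu=1.0):
    alpha = epsilon / steps
    g = torch.zeros_like(image)
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        # Accumulate a normalized gradient so the update direction is stabilized.
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        adv = (adv.detach() + alpha * g.sign()).clamp(0, 1)
    return adv
```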