Proceedings ArticleDOI

Deceiving the Protector: Fooling Face Presentation Attack Detection Algorithms

TLDR
For the first time in the literature, it is shown that PAD algorithms can be "fooled" by adversarial perturbations, with a convolutional autoencoder used to learn the perturbations.
Abstract
Face recognition systems are vulnerable to presentation attacks such as replay and 3D masks. In the literature, several presentation attack detection (PAD) algorithms have been developed to address this problem. However, for the first time in the literature, this paper showcases that it is possible to "fool" the PAD algorithms using adversarial perturbations. The proposed perturbation approach attacks the presentation attack detection algorithms at the PAD feature level by transforming features from one class (attack) to another (real). The PAD feature-tampering network utilizes a convolutional autoencoder to learn the perturbations. The proposed algorithm is evaluated against CNN-based and local binary pattern (LBP)-based PAD algorithms. Experiments on three databases, Replay, SMAD, and Face Morph, showcase that the proposed approach increases the equal error rate (EER) of PAD algorithms at least twofold. For instance, on the SMAD database, the PAD EER increases from 20.1% to 55.7% after the attack.
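
To make the feature-level attack concrete, here is a minimal PyTorch sketch of the idea the abstract describes: a convolutional autoencoder trained to transform attack-class PAD features toward real-class features. The layer sizes, loss choice, pairing of samples, and all names are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (assumptions labeled): a convolutional autoencoder that
# learns to perturb PAD feature maps of attack samples so they resemble
# real-class features.
import torch
import torch.nn as nn

class FeatureTamperingAE(nn.Module):
    def __init__(self, channels: int = 64):  # channel count is assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, channels, kernel_size=3, padding=1),
        )

    def forward(self, attack_features: torch.Tensor) -> torch.Tensor:
        # Output perturbed features intended to pass as the real class.
        return self.decoder(self.encoder(attack_features))

model = FeatureTamperingAE(channels=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

# Stand-in PAD feature maps; a real setup would extract these from the
# attacked PAD network (hypothetical pairing for illustration).
attack_feats = torch.randn(8, 64, 32, 32)
real_feats = torch.randn(8, 64, 32, 32)

optimizer.zero_grad()
perturbed = model(attack_feats)
loss = criterion(perturbed, real_feats)  # push attack features toward real
loss.backward()
optimizer.step()
```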


Citations
Journal ArticleDOI

On the Robustness of Face Recognition Algorithms Against Attacks and Bias

TL;DR: The different ways in which the robustness of a face recognition algorithm can be challenged, which can severely affect its intended working, are summarized.
Proceedings ArticleDOI

Deceiving Face Presentation Attack Detection via Image Transforms

TL;DR: For the first time, it is shown that simple intensity transforms such as Gamma correction, log transform, and brightness control can help an attacker to deceive face presentation attack detection algorithms.
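
As a concrete illustration of the transforms named in this TL;DR, the following NumPy sketch applies gamma correction, a log transform, and brightness control to an image scaled to [0, 1]; the parameter values and function names are illustrative assumptions.

```python
# Hedged sketch of simple intensity transforms on an image in [0, 1].
import numpy as np

def gamma_correction(img: np.ndarray, gamma: float = 0.8) -> np.ndarray:
    # gamma < 1 brightens, gamma > 1 darkens (value is an assumption)
    return np.clip(img ** gamma, 0.0, 1.0)

def log_transform(img: np.ndarray) -> np.ndarray:
    # Scale so the maximum input value maps to 1.0.
    c = 1.0 / np.log1p(img.max()) if img.max() > 0 else 1.0
    return c * np.log1p(img)

def brightness_control(img: np.ndarray, delta: float = 0.1) -> np.ndarray:
    return np.clip(img + delta, 0.0, 1.0)

img = np.random.rand(224, 224, 3)  # stand-in face image in [0, 1]
transformed = brightness_control(gamma_correction(log_transform(img)))
```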
Proceedings ArticleDOI

CHIF: Convoluted Histogram Image Features for Detecting Silicone Mask based Face Presentation Attack

TL;DR: This research proposes a computationally efficient solution for silicone mask based presentation attacks by utilizing the power of CNN filters and texture encoding: the image region is binarized after convolving it with filters learned via CNN operations.
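
The encoding this TL;DR describes can be sketched roughly in PyTorch: convolve a face region with learned filters, binarize the responses, and pool them into a histogram feature. The filter count, threshold, and histogram scheme below are assumptions for illustration, not the exact CHIF formulation.

```python
# Rough sketch: convolve, binarize, histogram (all parameters assumed).
import torch
import torch.nn.functional as F

region = torch.rand(1, 1, 64, 64)   # stand-in grayscale face region
filters = torch.randn(8, 1, 3, 3)   # stand-in "learned" CNN filters

responses = F.conv2d(region, filters, padding=1)  # (1, 8, 64, 64)
binary = (responses > 0).float()                  # binarize responses

# Combine the 8 binary maps into per-pixel codes (0..255), then histogram.
weights = (2 ** torch.arange(8)).float().view(1, 8, 1, 1)
codes = (binary * weights).sum(dim=1).long().flatten()
histogram = torch.bincount(codes, minlength=256).float()
histogram /= histogram.sum()  # normalized texture-encoding feature vector
```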
Journal ArticleDOI

Beyond the Pixel World: A Novel Acoustic-Based Face Anti-Spoofing System for Smartphones

TL;DR: A novel and cost-effective FAS system based on the acoustic modality, named Echo-FAS, employs a crafted acoustic signal as the probe to perform face liveness detection and provides new insights into the development of FAS systems for mobile devices.
Posted Content

MixNet for Generalized Face Presentation Attack Detection

TL;DR: This research proposes a deep learning-based network termed MixNet to detect presentation attacks in cross-database and unseen-attack settings, and experiments show the effectiveness of the proposed algorithm.
References
Book ChapterDOI

Visualizing and Understanding Convolutional Networks

TL;DR: A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; used in a diagnostic role, it helps find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Proceedings Article

Intriguing properties of neural networks

TL;DR: It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Journal ArticleDOI

What is a support vector machine?

TL;DR: Support vector machines are becoming popular in a wide variety of biological applications, but how do they work and what are their most promising applications in the life sciences?
Proceedings ArticleDOI

Return of the Devil in the Details: Delving Deep into Convolutional Nets

TL;DR: It is shown that the data augmentation techniques commonly applied to CNN-based methods can also be applied to shallow methods and yield an analogous performance boost, and that the dimensionality of the CNN output layer can be reduced significantly without an adverse effect on performance.
Journal ArticleDOI

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

TL;DR: This paper presents a comprehensive survey of adversarial attacks on deep learning in computer vision, reviewing works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.