Proceedings ArticleDOI

Deceiving the Protector: Fooling Face Presentation Attack Detection Algorithms

TLDR
For the first time in the literature, it is shown that PAD algorithms can be "fooled" using adversarial perturbations, with a convolutional autoencoder used to learn the perturbation network.
Abstract
Face recognition systems are vulnerable to presentation attacks such as replay and 3D masks. In the literature, several presentation attack detection (PAD) algorithms have been developed to address this problem. However, for the first time in the literature, this paper showcases that it is possible to "fool" the PAD algorithms using adversarial perturbations. The proposed perturbation approach attacks the presentation attack detection algorithms at the PAD feature level via transformation of features from one class (attack class) to another (real class). The PAD feature tampering network utilizes a convolutional autoencoder to learn the perturbations. The proposed algorithm is evaluated with respect to CNN and local binary pattern (LBP) based PAD algorithms. Experiments on three databases, Replay, SMAD, and Face Morph, showcase that the proposed approach increases the equal error rate of PAD algorithms by at least a factor of two. For instance, on the SMAD database, the PAD equal error rate (EER) increases from 20.1% to 55.7% after attacking the PAD algorithm.
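The feature-level tampering idea in the abstract can be illustrated with a much simpler stand-in: instead of the paper's convolutional autoencoder on PAD features, the sketch below learns a single linear map, by gradient descent, that pushes attack-class feature vectors toward the real-class mean. All data, dimensions, and the nearest-class-mean "PAD classifier" are hypothetical toy constructions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PAD feature vectors (hypothetical stand-ins for CNN/LBP features):
# "real" features cluster around +1, "attack" features around -1.
d = 16
real = rng.normal(loc=1.0, scale=0.3, size=(200, d))
attack = rng.normal(loc=-1.0, scale=0.3, size=(200, d))

# Single linear map W trained to send attack features toward the
# real-class mean (the paper uses a convolutional autoencoder instead).
W = np.eye(d)
target = real.mean(axis=0)
lr = 0.01
for _ in range(500):
    out = attack @ W
    grad = 2.0 * attack.T @ (out - target) / len(attack)  # MSE gradient
    W -= lr * grad

tampered = attack @ W

# A nearest-class-mean "PAD classifier": label by the closer class mean.
def classify(x):
    d_real = np.linalg.norm(x - real.mean(axis=0))
    d_attack = np.linalg.norm(x - attack.mean(axis=0))
    return "real" if d_real < d_attack else "attack"

# Fraction of tampered attack features now misclassified as real.
flip_rate = np.mean([classify(x) == "real" for x in tampered])
print(flip_rate > 0.9)
```

Under this toy setup, nearly all tampered attack features land closer to the real-class mean, mirroring how feature-level perturbation degrades a PAD decision boundary.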


Citations
Journal ArticleDOI

On the Robustness of Face Recognition Algorithms Against Attacks and Bias

TL;DR: The different ways in which the robustness of a face recognition algorithm can be challenged, severely affecting its intended working, are summarized.
Proceedings ArticleDOI

Deceiving Face Presentation Attack Detection via Image Transforms

TL;DR: For the first time, it is shown that simple intensity transforms such as Gamma correction, log transform, and brightness control can help an attacker to deceive face presentation attack detection algorithms.
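The intensity transforms this TL;DR names are standard pixel-wise operations. The sketch below shows gamma correction and the log transform on a random stand-in image (not a real face frame); the point is that such transforms keep the face content intact while shifting the intensity statistics a PAD model may rely on.

```python
import numpy as np

# Hypothetical 8-bit grayscale "face image" (random stand-in for a frame).
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)

def gamma_correct(x, gamma):
    """Pixel-wise power-law transform on a [0, 255] image."""
    return 255.0 * (x / 255.0) ** gamma

def log_transform(x):
    """Compresses dynamic range; c rescales the output back to [0, 255]."""
    c = 255.0 / np.log1p(255.0)
    return c * np.log1p(x)

bright = gamma_correct(img, 0.5)  # gamma < 1 brightens midtones
dark = gamma_correct(img, 2.0)    # gamma > 1 darkens midtones
logged = log_transform(img)

# The transforms are monotone, so pixel ordering (and face structure)
# is preserved while the brightness statistics shift.
print(bright.mean() > img.mean(), dark.mean() < img.mean())
```

Because both transforms are monotone on [0, 255], they are visually benign "brightness controls" to a human yet can move an input across a PAD model's decision boundary.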
Proceedings ArticleDOI

CHIF: Convoluted Histogram Image Features for Detecting Silicone Mask based Face Presentation Attack

TL;DR: This research proposes a computationally efficient solution that utilizes CNN filters and texture encoding for silicone-mask-based presentation attacks, binarizing an image region after convolving it with filters learned via CNN operations.
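The "convolve, binarize, encode texture" pipeline can be sketched as follows. The 3x3 kernel here is a fixed hypothetical stand-in for one learned CNN filter, and the histogram step is a minimal illustration of texture encoding, not the CHIF feature itself.

```python
import numpy as np

rng = np.random.default_rng(2)
patch = rng.random((32, 32))  # hypothetical grayscale face region

# Hypothetical 3x3 filter standing in for one filter learned via CNN.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def convolve2d_valid(img, k):
    """Plain 'valid' 2D cross-correlation with explicit loops."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

response = convolve2d_valid(patch, kernel)

# Binarize the filter response, then summarize it as a histogram
# feature, mirroring the convolve-binarize-encode idea.
binary = (response > 0).astype(np.uint8)
hist, _ = np.histogram(binary, bins=2)
print(hist.sum() == binary.size)
```

In a full system one histogram per filter (and per spatial block) would be concatenated into the final feature vector fed to a classifier.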
Journal ArticleDOI

Beyond the Pixel World: A Novel Acoustic-Based Face Anti-Spoofing System for Smartphones

TL;DR: A novel and cost-effective FAS system based on the acoustic modality, named Echo-FAS, which employs the crafted acoustic signal as the probe to perform face liveness detection and provides new insights regarding the development of FAS systems for mobile devices.
Posted Content

MixNet for Generalized Face Presentation Attack Detection

TL;DR: This research proposes a deep learning-based network, termed MixNet, to detect presentation attacks in cross-database and unseen-attack settings; experiments show the effectiveness of the proposed algorithm.
References
Journal ArticleDOI

Detecting and Mitigating Adversarial Perturbations for Robust Face Recognition

TL;DR: This paper attempts to unravel three aspects of the robustness of DNNs for face recognition: their vulnerabilities to attacks; detecting singularities by characterizing abnormal filter-response behavior in the hidden layers of deep networks; and correcting the processing pipeline to alleviate the problem.
Proceedings ArticleDOI

Are Image-Agnostic Universal Adversarial Perturbations for Face Recognition Difficult to Detect?

TL;DR: A simple but efficient approach is proposed that uses pixel values and Principal Component Analysis as features, coupled with a Support Vector Machine classifier, to detect image-agnostic universal perturbations.
Proceedings ArticleDOI

SWAPPED! Digital face presentation attack detection via weighted local magnitude pattern

TL;DR: A novel database, termed SWAPPED (Digital Attack Video Face Database), is presented. It is prepared using Snapchat's application, which swaps/stitches two faces to create videos, and contains bona fide face videos and face-swapped videos of multiple subjects.
Posted Content

Anonymizing k-Facial Attributes via Adversarial Perturbations

TL;DR: The proposed adversarial-perturbation-based algorithm embeds imperceptible noise in an image so that the attribute prediction algorithm yields an incorrect classification for the selected attribute, thereby preserving information according to the user's choice.
Book ChapterDOI

Introduction to Face Presentation Attack Detection

TL;DR: A brief introduction to face presentation attack detection is given in this paper. The authors present the different presentation attacks a face recognition system can confront, in which an attacker presents an artifact (generally a photograph, a video, or a mask) to the sensor, mainly a camera, to impersonate a genuine user.