Book Chapter DOI

Convolutional Neural Networks to Protect Against Spoofing Attacks on Biometric Face Authentication

23 Sep 2021, pp. 123–146
TL;DR: CNN settings and configurations are reviewed, and error indicators of the first and second kind, estimated by cross-entropy values, are used as reliable and reproducible measures of the effectiveness of protection against spoofing attacks on biometric face authentication.
Abstract: Modern authentication and access-authorization technologies play a significant role in protecting information in many practical applications. We consider face authentication, the most convenient approach and the one most widely used in modern mobile devices, in which access is granted on the basis of features extracted from biometric images of the user's face. Most such systems rely on intelligent processing of biometric images, in particular artificial intelligence and deep learning. At the same time, as is always the case in cybersecurity, techniques for defeating biometric authentication are also being studied and refined. The most common attack today is spoofing, in which attackers present pre-recorded biometric images to gain unauthorized access to critical information; for example, a photo or video of a person may be used to unlock their smartphone. Protecting against such attacks is difficult, because it requires developing and studying technologies for detecting signs of life (liveness detection). The most promising techniques in this direction are based on artificial intelligence, in particular convolutional neural networks (CNN); this practical application of intelligent processing of biometric images is the subject of this article. We review various CNN settings and configurations and experimentally investigate their effect on the effectiveness of liveness detection. For this purpose, we use error indicators of the first and second kind, estimated by cross-entropy values; these are reliable and reproducible indicators that characterize the effectiveness of protection against spoofing attacks on biometric face authentication. The widely used TensorFlow and OpenCV libraries are employed for the experiments, with photos and videos of various users as source data, including the Replay-Attack Database from the Idiap Research Institute.
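The abstract describes a CNN-based liveness detector trained and evaluated via cross-entropy with TensorFlow. Below is a minimal sketch, assuming TensorFlow/Keras, of such a binary live-vs-spoof classifier; the layer sizes, input resolution and optimizer are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of a binary liveness-detection CNN, assuming TensorFlow/Keras.
# Architecture details are illustrative, not the configuration studied in the paper.
import tensorflow as tf

def build_liveness_cnn(input_shape=(96, 96, 3)):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = live face, 0 = spoof
    ])
    # Binary cross-entropy matches the cross-entropy-based evaluation in the paper;
    # false negatives / false positives correspond to errors of the first and second kind.
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.FalseNegatives(),
                           tf.keras.metrics.FalsePositives()])
    return model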
Citations
Journal Article DOI
TL;DR: In this article, the authors review the literature on the use of AI in physiological-characteristics recognition published after 2015, and use the three-layer architecture of the IoT (i.e., sensing layer, feature layer, and algorithm layer) to guide the discussion of existing approaches and their limitations.
References
Journal Article DOI
TL;DR: In this article, it is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under another and gives the same results, although perhaps not in the same time.

14,937 citations

Book
01 Jan 1988

8,937 citations

Book
D. O. Hebb
01 Jan 1949
TL;DR: In this book, the author sets out the problem and the line of attack, and discusses the first stage of perception (growth of the cell assembly), the phase sequence, and problems of motivation, including motivational drift.
Abstract: Contents: Introduction. The Problem and the Line of Attack. Summation and Learning in Perception. Field Theory and Equipotentiality. The First Stage of Perception: Growth of the Assembly. Perception of a Complex: The Phase Sequence. Development of the Learning Capacity. Higher and Lower Processes Related to Learning. Problems of Motivation: Pain and Hunger. The Problem of Motivational Drift. Emotional Disturbances. The Growth and Decline of Intelligence.

5,038 citations

Journal Article DOI
TL;DR: An efficient and fairly robust face spoof detection algorithm based on image distortion analysis (IDA) is proposed; it outperforms state-of-the-art spoof detection methods, and the results highlight the difficulty of separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
Abstract: Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
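As a rough illustration of the IDA feature pipeline described above, the following sketch, assuming OpenCV, NumPy and scikit-learn, approximates two of the four features (blurriness and color diversity) and feeds them to an SVM; the paper's exact feature definitions, its per-attack SVM ensemble and the voting scheme for video are more involved.

# Simplified sketch of IDA-style spoof-detection features, assuming OpenCV,
# NumPy and scikit-learn. Only blurriness and color diversity are approximated.
import cv2
import numpy as np
from sklearn.svm import SVC

def ida_like_features(face_bgr):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    # Blurriness proxy: variance of the Laplacian (recaptured faces tend to be blurrier).
    blurriness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Color-diversity proxy: number of distinct colors after coarse quantization
    # (printed and replayed faces often lose color richness).
    quantized = (face_bgr // 32).reshape(-1, 3)
    color_diversity = len(np.unique(quantized, axis=0))
    return np.array([blurriness, color_diversity], dtype=np.float64)

# X: feature vectors of training faces, y: labels (1 = genuine, 0 = spoof)
# clf = SVC(kernel="rbf").fit(X, y)
# For video, per-frame decisions can be combined by majority voting, as in the paper.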

716 citations

Proceedings Article
27 Sep 2012
TL;DR: This paper inspects the potential of texture features based on Local Binary Patterns (LBP) and their variations against three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. It concludes that LBP shows moderate discriminability when confronted with a wide set of attack types.
Abstract: Spoofing attacks are one of the security traits that biometric recognition systems are proven to be vulnerable to. When spoofed, a biometric recognition system is bypassed by presenting a copy of the biometric evidence of a valid user. Among all biometric modalities, spoofing a face recognition system is particularly easy to perform: all that is needed is a simple photograph of the user. In this paper, we address the problem of detecting face spoofing attacks. In particular, we inspect the potential of texture features based on Local Binary Patterns (LBP) and their variations on three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. For this purpose, we introduce REPLAY-ATTACK, a novel publicly available face spoofing database which contains all the mentioned types of attacks. We conclude that LBP, with ∼15% Half Total Error Rate, show moderate discriminability when confronted with a wide set of attack types.
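As an illustration of the LBP-based approach described above, the following sketch, assuming scikit-image, NumPy and scikit-learn, computes a uniform-LBP histogram per face and defines the Half Total Error Rate used to report the roughly 15% result; the LBP variants, parameters and classifier actually evaluated in the paper may differ.

# Minimal sketch of an LBP texture descriptor for spoof detection, assuming
# scikit-image, NumPy and scikit-learn; parameters are illustrative assumptions.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, points=8, radius=1):
    # Uniform LBP yields points + 2 distinct codes; the normalized histogram
    # over these codes is the texture feature vector for one face image.
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def half_total_error_rate(far, frr):
    # HTER is the mean of the false acceptance and false rejection rates;
    # the paper reports an HTER of about 15% for LBP on REPLAY-ATTACK.
    return (far + frr) / 2.0

# X: LBP histograms of training faces, y: labels (1 = real access, 0 = attack)
# clf = SVC(kernel="linear").fit(X, y)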

707 citations