scispace - formally typeset
Author

Jukka Komulainen

Bio: Jukka Komulainen is an academic researcher at the University of Oulu. He has contributed to research on spoofing attacks and face detection, has an h-index of 16, and has co-authored 27 publications receiving 2086 citations.

Papers
Journal ArticleDOI
TL;DR: This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis that exploits the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces.
Abstract: Research on non-intrusive software-based face spoofing detection schemes has been mainly focused on the analysis of the luminance information of the face images, hence discarding the chroma component, which can be very useful for discriminating fake faces from genuine ones. This paper introduces a novel and appealing approach for detecting face spoofing using a colour texture analysis. We exploit the joint colour-texture information from the luminance and the chrominance channels by extracting complementary low-level feature descriptions from different colour spaces. More specifically, the feature histograms are computed over each image band separately. Extensive experiments on the three most challenging benchmark data sets, namely, the CASIA face anti-spoofing database, the replay-attack database, and the MSU mobile face spoof database, showed excellent results compared with the state of the art. More importantly, unlike most of the methods proposed in the literature, our proposed approach is able to achieve stable performance across all the three benchmark data sets. The promising results of our cross-database evaluation suggest that the facial colour texture representation is more stable in unknown conditions compared with its gray-scale counterparts.

449 citations
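The per-band histogram idea described in the abstract above can be illustrated with a minimal sketch (not the authors' implementation; the paper also considers other descriptors and colour spaces): a basic 8-neighbour LBP histogram is computed for each YCbCr channel and the histograms are concatenated. All function names here are hypothetical.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB float image (H, W, 3) in [0, 1] to YCbCr (ITU-R BT.601)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def lbp_histogram(channel, bins=256):
    """Basic 8-neighbour LBP over one channel, returned as a normalised histogram."""
    h, w = channel.shape
    center = channel[1:-1, 1:-1]
    code = np.zeros_like(center, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neigh >= center).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=bins).astype(float)
    return hist / hist.sum()

def colour_texture_feature(img):
    """Concatenate per-band LBP histograms, as described in the abstract."""
    ycbcr = rgb_to_ycbcr(img)
    return np.concatenate([lbp_histogram(ycbcr[..., i]) for i in range(3)])
```

The concatenated histogram would then feed a binary classifier (e.g., an SVM) trained on genuine versus spoof samples.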

Proceedings ArticleDOI
01 May 2017
TL;DR: This work introduces a new public face PAD database, OULU-NPU, aiming at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions, acquisition devices and presentation attack instruments.
Abstract: The vulnerabilities of face-based biometric systems to presentation attacks have finally been recognized, yet we still lack generalized software-based face presentation attack detection (PAD) methods that perform robustly in practical mobile authentication scenarios. This is mainly because the existing public face PAD datasets, while beginning to cover a variety of attack scenarios and acquisition conditions, have standard evaluation protocols that do not encourage researchers to assess the generalization capabilities of their methods across these variations. In the present work, we introduce a new public face PAD database, OULU-NPU, aimed at evaluating the generalization of PAD methods in more realistic mobile authentication scenarios across three covariates: unknown environmental conditions (namely illumination and background scene), acquisition devices, and presentation attack instruments (PAI). This publicly available database consists of 5940 videos of 55 subjects recorded in three different environments using the high-resolution frontal cameras of six different smartphones. The high-quality print and video-replay attacks were created using two different printers and two different display devices. Each of the four unambiguously defined evaluation protocols introduces at least one previously unseen condition to the test set, which enables a fair comparison of the generalization capabilities of new and existing approaches. The baseline results using a color-texture-analysis-based face PAD method demonstrate the challenging nature of the database.

416 citations

Proceedings ArticleDOI
01 Sep 2013
TL;DR: This work provides the first investigation in research literature that attempts to detect the presence of spoofing medium in the observed scene and shows that the proposed approach has promising generalization capabilities.
Abstract: The face recognition community has finally started paying more attention to the long-neglected problem of spoofing attacks, and the number of countermeasures is gradually increasing. Fairly good results have been reported on the publicly available databases, but it is reasonable to assume that no single superior anti-spoofing technique exists, given the varying nature of attack scenarios and acquisition conditions. Therefore, we propose to approach the problem of face spoofing as a set of attack-specific subproblems that are solvable with a proper combination of complementary countermeasures. Inspired by how humans can perform reliable spoofing detection based only on the available scene and context information, this work provides the first investigation in the research literature that attempts to detect the presence of the spoofing medium in the observed scene. We experiment with two publicly available databases consisting of several fake face attacks of different natures under varying conditions and imaging qualities. The experiments show excellent results beyond the state of the art. More importantly, our cross-database evaluation shows that the proposed approach has promising generalization capabilities.

261 citations

Posted Content
TL;DR: In this article, a color local binary pattern descriptor was proposed to analyze the joint color-texture information from the luminance and the chrominance channels, which can be used for face spoofing detection.
Abstract: Research on face spoofing detection has mainly focused on analyzing the luminance of face images, hence discarding the chrominance information, which can be useful for discriminating fake faces from genuine ones. In this work, we propose a new face anti-spoofing method based on color texture analysis. We analyze the joint color-texture information from the luminance and the chrominance channels using a color local binary pattern descriptor. More specifically, the feature histograms are extracted from each image band separately. Extensive experiments on two benchmark datasets, namely the CASIA face anti-spoofing and Replay-Attack databases, showed excellent results compared to the state of the art. Most importantly, our inter-database evaluation shows that the proposed approach has very promising generalization capabilities.

247 citations

Journal ArticleDOI
TL;DR: This letter proposes a novel solution based on describing the facial appearance by applying Fisher vector encoding on speeded-up robust features extracted from different color spaces that outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used.
Abstract: The vulnerabilities of face biometric authentication systems to spoofing attacks have received significant attention in recent years. Some of the proposed countermeasures have achieved impressive results in intra-database tests, i.e., when the system is trained and tested on the same database. Unfortunately, most of these techniques fail to generalize well to unseen attacks, e.g., when the system is trained on one database and then evaluated on another. This is a major concern in biometric antispoofing research that is mostly overlooked. In this letter, we propose a novel solution based on describing the facial appearance by applying Fisher vector encoding to speeded-up robust features (SURF) extracted from different color spaces. The evaluation of our countermeasure on three challenging benchmark face-spoofing databases, namely the CASIA face antispoofing database, the Replay-Attack database, and the MSU mobile face spoof database, showed excellent and stable performance across all three datasets. Most importantly, in inter-database tests, our proposed approach outperforms the state of the art and yields very promising generalization capabilities, even when only limited training data are used.

239 citations
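The Fisher vector encoding mentioned in the abstract above can be sketched as follows, assuming the standard improved-Fisher-vector formulation (mean and variance gradients under a diagonal-covariance GMM). This is an illustrative approximation, not the authors' code; the random descriptors below are stand-ins for SURF features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(descriptors, gmm):
    """Improved Fisher vector (mean + variance gradients) for a set of
    local descriptors under a fitted diagonal-covariance GMM."""
    X = np.atleast_2d(descriptors)
    N = X.shape[0]
    gamma = gmm.predict_proba(X)                       # (N, K) soft assignments
    w, mu, var = gmm.weights_, gmm.means_, gmm.covariances_
    sigma = np.sqrt(var)                               # (K, D)
    diff = (X[:, None, :] - mu[None, :, :]) / sigma    # (N, K, D) whitened residuals
    g_mu = (gamma[..., None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
    g_var = (gamma[..., None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))             # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)           # L2 normalisation

# Usage sketch: fit the GMM vocabulary on pooled training descriptors,
# then encode each face image's descriptor set as one fixed-length vector.
rng = np.random.default_rng(0)
train_descriptors = rng.normal(size=(500, 8))          # stand-in for SURF features
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(train_descriptors)
fv = fisher_vector(rng.normal(size=(200, 8)), gmm)
```

The resulting fixed-length vector would typically be classified with a linear model.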


Cited by
Journal ArticleDOI
TL;DR: An efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA) that outperforms the state-of-the-art methods in spoof detection and highlights the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
Abstract: Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.

716 citations
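Two of the four IDA cues named in the abstract (blurriness and color diversity) can be approximated with simple image statistics. The sketch below is a loose illustration under assumed definitions, not the paper's exact features:

```python
import numpy as np

def blurriness(gray):
    """Blurriness cue: how little detail a light re-blurring removes.
    A recaptured (spoof) face tends to be blurrier, so re-blurring
    changes it less and this score is closer to 1."""
    h, w = gray.shape
    # 3x3 box blur via shifted sums (crops a 1-pixel border)
    blurred = sum(gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    detail = np.abs(gray[1:-1, 1:-1] - blurred)
    return 1.0 - detail.mean() / (gray[1:-1, 1:-1].mean() + 1e-12)

def colour_diversity(img, bins=4):
    """Colour-diversity cue: fraction of pixels falling into the most
    common coarse colour bin -- recaptured images tend to lose variety,
    pushing this fraction up."""
    q = np.clip((img * bins).astype(int), 0, bins - 1)
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    counts = np.bincount(codes.ravel(), minlength=bins ** 3)
    return counts.max() / counts.sum()
```

In the paper, four such cues form a feature vector fed to an ensemble of SVMs, one per attack type; the exact feature definitions there differ from this toy version.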

Proceedings ArticleDOI
01 Dec 2018
TL;DR: This work describes a new method to expose fake face videos generated with deep neural network models based on detection of eye blinking in the videos, which is a physiological signal that is not well presented in the synthesized fake videos.
Abstract: New developments in deep generative networks have significantly improved the quality and efficiency of generating realistic-looking fake face videos. In this work, we describe a new method to expose fake face videos generated with deep neural network models. Our method is based on detecting eye blinking in the videos, a physiological signal that is not well presented in synthesized fake videos. Our method is evaluated over benchmark eye-blinking detection datasets and shows promising performance in detecting videos generated with the DNN-based software DeepFake.

532 citations
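The paper's detector is learned, but the underlying blink signal can be illustrated with a simple heuristic: given a per-frame eye-aspect-ratio (EAR) series from a landmark detector, count runs of low-EAR frames. This threshold-based sketch is an assumption-laden stand-in, not the paper's method:

```python
import numpy as np

def count_blinks(ear_series, thresh=0.2, min_frames=2):
    """Count blinks in a per-frame eye-aspect-ratio (EAR) series.

    A blink is a run of at least `min_frames` consecutive frames whose
    EAR falls below `thresh` (values are illustrative, not calibrated).
    A synthesized face video with no such runs would be suspicious.
    """
    closed = np.asarray(ear_series) < thresh
    blinks, run = 0, 0
    for is_closed in closed:
        if is_closed:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:  # series may end mid-blink
        blinks += 1
    return blinks
```

A downstream check might flag a video whose blink rate over its duration falls far below typical human rates.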

Proceedings ArticleDOI
18 Jun 2018
TL;DR: This paper argues the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues, and introduces a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations.
Abstract: Face anti-spoofing is crucial for protecting face recognition systems from security breaches. Previous deep learning approaches formulate face anti-spoofing as a binary classification problem; many of them struggle to grasp adequate spoofing cues and generalize poorly. In this paper, we argue for the importance of auxiliary supervision to guide the learning toward discriminative and generalizable cues. A CNN-RNN model is learned to estimate face depth with pixel-wise supervision and to estimate rPPG signals with sequence-wise supervision. The estimated depth and rPPG are fused to distinguish live from spoof faces. Further, we introduce a new face anti-spoofing database that covers a large range of illumination, subject, and pose variations. Experiments show that our model achieves state-of-the-art results on both intra- and cross-database testing.

502 citations
