Proceedings ArticleDOI

Face anti-spoofing with multifeature videolet aggregation

TL;DR: A novel multi-feature evidence aggregation method for face spoofing detection that fuses evidence from features encoding both texture and motion properties of the face and the surrounding scene regions, providing robustness to different attacks.
Abstract: Biometric systems can be attacked in several ways, the most common being spoofing of the input sensor. Anti-spoofing is therefore an essential safeguard against attacks on biometric systems. Face recognition is particularly vulnerable because image capture is non-contact. Several anti-spoofing methods have been proposed in the literature for both contact and non-contact biometric modalities, often using video to study the temporal characteristics of a real vs. spoofed biometric signal. This paper presents a novel multi-feature evidence aggregation method for face spoofing detection. The proposed method fuses evidence from features encoding both texture and motion (liveness) properties of the face and the surrounding scene regions. The feature extraction algorithms are based on a configuration of local binary patterns and motion estimation using histograms of oriented optical flow. Furthermore, the multi-feature windowed videolet aggregation of these orthogonal features, coupled with support vector machine-based classification, provides robustness to different attacks. We demonstrate the efficacy of the proposed approach on three standard public databases, CASIA-FASD, 3DMAD, and MSU-MFSD, achieving equal error rates of 3.14%, 0%, and 0%, respectively.
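The pipeline described above can be sketched roughly as follows, assuming OpenCV, scikit-image, and scikit-learn are available; the LBP configuration, HOOF binning, videolet length, and SVM settings here are illustrative placeholders rather than the authors' exact parameters.

```python
# Hedged sketch of a videolet-level texture + motion pipeline: per-frame LBP
# histograms (texture) and HOOF histograms (motion) are averaged over a short
# window ("videolet"), concatenated, and classified with an SVM.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP histogram of one grayscale frame (texture evidence)."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def hoof_histogram(prev_gray, gray, bins=32):
    """Magnitude-weighted histogram of optical-flow orientations (motion evidence)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi),
                           weights=mag, density=True)
    return hist

def videolet_feature(frames):
    """Aggregate texture and motion evidence over one videolet (short frame window)."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    texture = np.mean([lbp_histogram(g) for g in grays], axis=0)
    motion = np.mean([hoof_histogram(a, b) for a, b in zip(grays, grays[1:])], axis=0)
    return np.concatenate([texture, motion])

# Training (illustrative): X is a list of videolets, y the live/spoof labels.
# clf = SVC(kernel="rbf").fit([videolet_feature(v) for v in X], y)
```

Note that the paper processes the face and the surrounding scene regions separately before fusing them; the sketch above treats a single region for brevity.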
Citations
Posted Content
TL;DR: The central difference convolutional network (CDCN) is extended to a multi-modal version, intended to capture intrinsic spoofing patterns across three modalities (RGB, depth, and infrared).
Abstract: Face anti-spoofing (FAS) plays a vital role in securing face recognition systems against presentation attacks. Existing multi-modal FAS methods rely on stacked vanilla convolutions, which are weak at describing detailed intrinsic information from the modalities and easily become ineffective when the domain shifts (e.g., cross-attack and cross-ethnicity settings). In this paper, we extend the central difference convolutional network (CDCN) [yu2020searching] to a multi-modal version, intending to capture intrinsic spoofing patterns across three modalities (RGB, depth, and infrared). Meanwhile, we also give an elaborate study of the single-modal CDCN. Our approach won first place in "Track Multi-Modal" and second place in "Track Single-Modal (RGB)" of the ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2020 [liu2020cross]. Our final submission obtains 1.02±0.59% and 4.84±1.79% ACER in "Track Multi-Modal" and "Track Single-Modal (RGB)", respectively. The code is available at this https URL.

2 citations

Posted Content
TL;DR: This work introduces a novel uncertainty-aware attention scheme that independently learns to weigh the relative contributions of the main and proxy tasks, preventing the over-confidence issue of traditional attention modules, and proposes attribute-assisted hard negative mining to disentangle liveness-irrelevant features from liveness features during learning.
Abstract: Face anti-spoofing (FAS) seeks to discriminate genuine faces from fake ones arising from any type of spoofing attack. Due to the wide variety of attacks, it is implausible to obtain training data that spans all attack types. We propose to leverage physical cues to attain better generalization on unseen domains. As a specific demonstration, we use physically guided proxy cues such as depth, reflection, and material to complement our main anti-spoofing (a.k.a. liveness detection) task, with the intuition that genuine faces across domains have consistent face-like geometry, minimal reflection, and skin material. We introduce a novel uncertainty-aware attention scheme that independently learns to weigh the relative contributions of the main and proxy tasks, preventing the over-confidence issue of traditional attention modules. Further, we propose attribute-assisted hard negative mining to disentangle liveness-irrelevant features from liveness features during learning. We evaluate extensively on public benchmarks with intra-dataset and inter-dataset protocols. Our method achieves superior performance, especially in generalization to unseen domains for FAS.
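As an illustration of how the relative contributions of a main liveness loss and several proxy losses can be learned, here is a generic uncertainty-based task-weighting sketch in PyTorch; it is a standard homoscedastic-uncertainty formulation, not the paper's specific uncertainty-aware attention module, and the task names in the usage comment are assumptions.

```python
import torch
import torch.nn as nn

class LearnedTaskWeighting(nn.Module):
    """Generic learned weighting of a main (liveness) loss and proxy losses
    (e.g., depth, reflection, material). Each task owns a learnable log-variance
    that down-weights its loss, discouraging over-confidence in any single cue.
    This is a standard uncertainty-based weighting, not the paper's exact scheme."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: iterable of per-task scalar losses, main task first.
        total = 0.0
        for loss, log_var in zip(losses, self.log_vars):
            total = total + torch.exp(-log_var) * loss + log_var
        return total

# weighting = LearnedTaskWeighting(4)
# loss = weighting([liveness_loss, depth_loss, reflection_loss, material_loss])
```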

2 citations


Cites background from "Face anti-spoofing with multifeatur..."

  • ...Some other works [1, 4, 49, 16, 61] utilize the temporal information assuming videos are available....


Book ChapterDOI
30 Nov 2020
TL;DR: An attended-auxiliary supervision approach for radical exploitation is presented, which automatically concentrates on the most important regions of the input, that is, those that make significant contributions towards distinguishing spoof cases from live faces, leading to notable improvements in performance.
Abstract: Recent face anti-spoofing methods have achieved impressive performance in recognizing the subtle discrepancies between live and spoof faces. However, because they extract features holistically and consequently rely on ineffective cues, previous methods remain unable to generalize to the diversity of presentation attacks. In this paper, we present an attended-auxiliary supervision approach for radical exploitation, which automatically concentrates on the most important regions of the input, that is, those that make significant contributions towards distinguishing spoof cases from live faces. Through a multi-task learning approach, the proposed network is able to locate the most relevant, attended, highly selective regions more accurately than previous methods, leading to notable improvements in performance. We also suggest that introducing spatial attention mechanisms can greatly enhance the model’s perception of the important information, strengthening its resilience against diverse types of face spoofing attacks. We carried out extensive experiments on publicly available face anti-spoofing datasets, showing that our approach and hypothesis converge to some extent and demonstrating state-of-the-art performance.
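The abstract credits spatial attention for part of the gain; a generic spatial-attention block (in the spirit of widely used attention modules, not necessarily the authors' exact design) can be sketched in PyTorch as follows.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention: a per-location gate derived from channel-pooled
    statistics re-weights the feature map, emphasizing discriminative face regions."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)         # (N, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values   # (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                # re-weighted features

# attended = SpatialAttention()(torch.randn(2, 64, 32, 32))
```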

2 citations


Cites background from "Face anti-spoofing with multifeatur..."

  • ...To cope with these difficulties, researchers have approached these problems in another way that maps input images onto other domains, namely HSV, YCbCr[18, 19], temporal domains[20, 21], and Fourier spectra[22]....


Journal ArticleDOI
TL;DR: A Transformer-based Face Anti-Spoofing (TransFAS) model is proposed to explore comprehensive facial parts for face anti-spoofing.
Abstract: Face anti-spoofing (FAS) is important for securing face recognition systems. Deep learning has achieved great success in this area; however, most existing approaches fail to consider comprehensive relation-aware local representations of live and spoof faces. To address this issue, we propose a Transformer-based Face Anti-Spoofing (TransFAS) model to explore comprehensive facial parts for FAS. Besides the multi-head self-attention, which explores relations among local patches in the same layer, we propose cross-layer relation-aware attention (CRA) to adaptively integrate local patches from different layers. Furthermore, to effectively fuse hierarchical features, we explore the best hierarchical feature fusion (HFF) structure, which can capture the complementary information between low-level artifacts and high-level semantic features of the spoofing patterns. With these novel modules, TransFAS not only improves the generalization capability of the classical vision transformer, but also achieves state-of-the-art performance on multiple benchmarks, demonstrating the superiority of the transformer-based model for FAS.

2 citations

Posted Content
TL;DR: A novel frame-level FAS method based on Central Difference Convolution (CDC) is proposed, which is able to capture intrinsic detailed patterns by aggregating both intensity and gradient information.
Abstract: Face anti-spoofing (FAS) plays a vital role in face recognition systems. Most state-of-the-art FAS methods 1) rely on stacked convolutions and expert-designed networks, which are weak at describing detailed fine-grained information and easily become ineffective when the environment varies (e.g., different illumination), and 2) prefer long sequences as input to extract dynamic features, making them difficult to deploy in scenarios that need a quick response. Here we propose a novel frame-level FAS method based on Central Difference Convolution (CDC), which is able to capture intrinsic detailed patterns by aggregating both intensity and gradient information. A network built with CDC, called the Central Difference Convolutional Network (CDCN), provides more robust modeling capacity than its counterpart built with vanilla convolution. Furthermore, over a specifically designed CDC search space, Neural Architecture Search (NAS) is utilized to discover a more powerful network structure (CDCN++), which can be assembled with a Multiscale Attention Fusion Module (MAFM) to further boost performance. Comprehensive experiments on six benchmark datasets show that 1) the proposed method achieves superior performance on intra-dataset testing (especially 0.2% ACER in Protocol-1 of the OULU-NPU dataset), and 2) it also generalizes well on cross-dataset testing (particularly 6.5% HTER from CASIA-MFSD to Replay-Attack). The code is available at this https URL.
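A minimal PyTorch sketch of the central difference convolution described above: the vanilla convolution response is combined with a center-difference term controlled by a hyperparameter theta, so both intensity and gradient information are aggregated. The layer sizes and theta value are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDConv2d(nn.Module):
    """Central Difference Convolution: vanilla conv minus a theta-weighted
    central-difference term, so the filter responds to gradients as well as intensity."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        out_normal = self.conv(x)
        if self.theta == 0:
            return out_normal
        # The summed kernel acts on the central pixel only (a 1x1 convolution).
        kernel_center = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_center, stride=self.conv.stride, padding=0)
        return out_normal - self.theta * out_center

# layer = CDConv2d(3, 64); features = layer(torch.randn(1, 3, 256, 256))
```

Setting theta to 0 recovers a vanilla convolution, which makes the contribution of the gradient term easy to ablate.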

2 citations


Cites methods from "Face anti-spoofing with multifeatur..."

  • ...Several classical local descriptors such as LBP [7, 15], SIFT [44], SURF [9], HOG [29] and DoG [45] are utilized to extract frame level features while video level methods usually capture dynamic clues like dynamic texture [28], micro-motion [53] and eye blinking [41]....


References
Journal ArticleDOI
TL;DR: A novel approach for recognizing DTs is proposed, and its simplifications and extensions to facial image analysis are also considered; both VLBP and LBP-TOP clearly outperformed the earlier approaches.
Abstract: Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
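A rough sketch of the LBP-TOP idea, assuming scikit-image: LBP histograms computed on the XY (appearance), XT, and YT (motion) planes of a video volume are concatenated. For brevity this version samples only the three central planes, whereas the full descriptor aggregates co-occurrences over the whole volume.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_top(volume, P=8, R=1):
    """Simplified LBP on Three Orthogonal Planes of a (T, H, W) grayscale volume:
    uniform-LBP histograms from the XY, XT, and YT planes through the volume
    center are concatenated into one spatio-temporal texture descriptor."""
    T, H, W = volume.shape
    planes = [
        volume[T // 2, :, :],   # XY plane: appearance
        volume[:, H // 2, :],   # XT plane: horizontal motion
        volume[:, :, W // 2],   # YT plane: vertical motion
    ]
    hists = []
    for plane in planes:
        lbp = local_binary_pattern(plane, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        hists.append(hist)
    return np.concatenate(hists)

# descriptor = lbp_top((np.random.rand(16, 64, 64) * 255).astype(np.uint8))
```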

2,653 citations


"Face anti-spoofing with multifeatur..." refers background in this paper

  • ...Dynamic texture features such as LBP-TOP [22] are studied in this regard....


Journal ArticleDOI
TL;DR: The inherent strengths of biometrics-based authentication are outlined, the weak links in systems employing biometric authentication are identified, and new solutions for eliminating these weak links are presented.
Abstract: Because biometrics-based authentication offers several advantages over other authentication methods, there has been a significant surge in the use of biometrics for user authentication in recent years. It is important that such biometrics-based authentication systems be designed to withstand attacks when employed in security-critical applications, especially in unattended remote applications such as e-commerce. In this paper we outline the inherent strengths of biometrics-based authentication, identify the weak links in systems employing biometrics-based authentication, and present new solutions for eliminating some of these weak links. Although, for illustration purposes, fingerprint authentication is used throughout, our analysis extends to other biometrics-based methods.

1,709 citations


"Face anti-spoofing with multifeatur..." refers background in this paper

  • ...Biometric systems have different points of vulnerability such as sensor attacks, overriding feature extraction, tampering feature representation, corrupting matcher, tampering stored template, and overriding decision [18]....


01 Jan 2009
TL;DR: This thesis builds a human-assisted motion annotation system to obtain ground-truth motion, missing in the literature, for natural video sequences, and proposes SIFT flow, a dense correspondence framework that enables image parsing by transferring metadata from the images in a large database to an unknown query image.
Abstract: The focus of motion analysis has been on estimating a flow vector for every pixel by matching intensities. In my thesis, I will explore motion representations beyond the pixel level and new applications to which these representations lead. I first focus on analyzing motion from video sequences. Traditional motion analysis suffers from the inappropriate modeling of the grouping relationship of pixels and from a lack of ground-truth data. Using layers as the interface for humans to interact with videos, we build a human-assisted motion annotation system to obtain ground-truth motion, missing in the literature, for natural video sequences. Furthermore, we show that with the layer representation, we can detect and magnify small motions to make them visible to human eyes. Then we move to a contour presentation to analyze the motion for textureless objects under occlusion. We demonstrate that simultaneous boundary grouping and motion analysis can solve challenging data, where the traditional pixel-wise motion analysis fails. In the second part of my thesis, I will show the benefits of matching local image structures instead of intensity values. We propose SIFT flow that establishes dense, semantically meaningful correspondence between two images across scenes by matching pixel-wise SIFT features. Using SIFT flow, we develop a new framework for image parsing by transferring the metadata information, such as annotation, motion and depth, from the images in a large database to an unknown query image. We demonstrate this framework using new applications such as predicting motion from a single image and motion synthesis via object transfer. Based on SIFT flow, we introduce a nonparametric scene parsing system using label transfer, with very promising experimental results suggesting that our system outperforms state-of-the-art techniques based on training classifiers.

899 citations


"Face anti-spoofing with multifeatur..." refers methods in this paper

  • ...The orientation based optical flow vector is computed by solving the optimization problem 1 using conjugate gradient method [12]....


Journal ArticleDOI
TL;DR: An efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA) that outperforms the state-of-the-art methods in spoof detection and highlights the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
Abstract: Automatic face recognition is now widely used in applications ranging from deduplication of identity to authentication of mobile payment. This popularity of face recognition has raised concerns about face spoof attacks (also known as biometric sensor presentation attacks), where a photo or video of an authorized person’s face could be used to gain access to facilities or services. While a number of face spoof detection techniques have been proposed, their generalization ability has not been adequately addressed. We propose an efficient and rather robust face spoof detection algorithm based on image distortion analysis (IDA). Four different features (specular reflection, blurriness, chromatic moment, and color diversity) are extracted to form the IDA feature vector. An ensemble classifier, consisting of multiple SVM classifiers trained for different face spoof attacks (e.g., printed photo and replayed video), is used to distinguish between genuine (live) and spoof faces. The proposed approach is extended to multiframe face spoof detection in videos using a voting-based scheme. We also collect a face spoof database, MSU mobile face spoofing database (MSU MFSD), using two mobile devices (Google Nexus 5 and MacBook Air) with three types of spoof attacks (printed photo, replayed video with iPhone 5S, and replayed video with iPad Air). Experimental results on two public-domain face spoof databases (Idiap REPLAY-ATTACK and CASIA FASD), and the MSU MFSD database show that the proposed approach outperforms the state-of-the-art methods in spoof detection. Our results also highlight the difficulty in separating genuine and spoof faces, especially in cross-database and cross-device scenarios.
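Two of the image distortion analysis cues named above, blurriness and color diversity, can be approximated with simple OpenCV/NumPy stand-ins as below; the paper's exact feature definitions differ, so treat these as illustrative proxies rather than the IDA features themselves.

```python
import cv2
import numpy as np

def blurriness(face_bgr):
    """Simple blurriness cue: low variance of the Laplacian suggests a blurry,
    often recaptured, face image. A stand-in for the paper's blurriness feature."""
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def color_diversity(face_bgr, top_k=100):
    """Color diversity cue: fraction of pixels covered by the most frequent
    quantized colors; recaptured images tend to lose color diversity."""
    quantized = (face_bgr // 32).reshape(-1, 3)
    _, counts = np.unique(quantized, axis=0, return_counts=True)
    counts = np.sort(counts)[::-1]
    return counts[:top_k].sum() / counts.sum()

# feature_vector = [blurriness(face), color_diversity(face)]  # e.g., fed to an SVM ensemble
```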

716 citations


"Face anti-spoofing with multifeatur..." refers background or methods in this paper

  • ...• On MSU dataset, HOOF obtains tremendous improvement in EER (from 30.41 to 2.50...


  • ...Similarly, at the Inter Feature Fusion stage, the correlation of 0.51, 0.62, and 0.66 is observed for CASIA, MSU, and 3DMAD datasets, respectively....


  • ...MSU dataset contains a higher fraction of replay attack videos compared to CASIA....


  • ...• Performance of the Proposed Approach: The proposed fusion approach (using HOOF and multi-LBP with face and scene aggregated over videolets) provides 0% EER with uncontrolled illumination and background on both MSU and 3DMAD datasets....


  • ...Orthogonal to the LBP texture descriptors based approaches, quality assessment metrics such as specular reflection, blurring and color density are also explored for anti-spoofing [10], [20]....


Proceedings Article
27 Sep 2012
TL;DR: This paper inspects the potential of texture features based on Local Binary Patterns (LBP) and their variations on three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. It concludes that LBP shows moderate discriminability when confronted with a wide set of attack types.
Abstract: Spoofing attacks are one of the security traits that biometric recognition systems are proven to be vulnerable to. When spoofed, a biometric recognition system is bypassed by presenting a copy of the biometric evidence of a valid user. Among all biometric modalities, spoofing a face recognition system is particularly easy to perform: all that is needed is a simple photograph of the user. In this paper, we address the problem of detecting face spoofing attacks. In particular, we inspect the potential of texture features based on Local Binary Patterns (LBP) and their variations on three types of attacks: printed photographs, and photos and videos displayed on electronic screens of different sizes. For this purpose, we introduce REPLAY-ATTACK, a novel publicly available face spoofing database which contains all the mentioned types of attacks. We conclude that LBP, with ∼15% Half Total Error Rate, show moderate discriminability when confronted with a wide set of attack types.
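Since the work above reports Half Total Error Rate (HTER) and the main paper reports equal error rate (EER), the following sketch shows one way to compute both from classifier scores (NumPy arrays of scores and 0/1 labels); the brute-force threshold search is for clarity only.

```python
import numpy as np

def far_frr(scores, labels, threshold):
    """False acceptance and false rejection rates at a given threshold.
    labels: 1 for genuine (live), 0 for spoof; higher score = more live-like."""
    genuine = scores[labels == 1]
    spoof = scores[labels == 0]
    far = np.mean(spoof >= threshold)   # spoofs wrongly accepted
    frr = np.mean(genuine < threshold)  # genuine faces wrongly rejected
    return far, frr

def eer(scores, labels):
    """Equal error rate: operating point where FAR and FRR are (nearly) equal."""
    best = min((abs(far_frr(scores, labels, t)[0] - far_frr(scores, labels, t)[1]), t)
               for t in np.unique(scores))
    far, frr = far_frr(scores, labels, best[1])
    return (far + frr) / 2

def hter(scores, labels, threshold):
    """Half Total Error Rate at a threshold fixed beforehand (e.g., on a development set)."""
    far, frr = far_frr(scores, labels, threshold)
    return (far + frr) / 2
```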

707 citations


Additional excerpts

  • ...The face anti-spoofing problem is extensively studied in literature, particularly with the introduction of Print Attack dataset [1], Replay Attack dataset [5], CASIA-FASD spoofing dataset [21], 3DMAD database [7], and MSU mobile face spoofing database [20]....
