Journal ArticleDOI

Open-set face recognition across look-alike faces in real-world scenarios

TL;DR: This work introduces three real-world open-set face recognition methods that handle facial plastic surgery changes and look-alike faces via 3D face reconstruction and sparse representation, and proposes likeness dictionary learning.
Abstract: The open-set problem is among the problems that significantly degrade the performance of face recognition algorithms in real-world scenarios. Open-set recognition operates under the assumption that not every probe has a matching subject in the gallery. Most face recognition systems in real-world scenarios focus on handling pose, expression and illumination variations. In addition to these challenges, as the number of subjects increases, the problem is intensified by look-alike faces, i.e., pairs of subjects with low inter-class variation; for such pairs, the inter-class similarity can exceed the intra-class similarity. Look-alike faces can arise intrinsically, from situational factors, or through facial plastic surgery. This work introduces three real-world open-set face recognition methods that handle facial plastic surgery changes and look-alike faces via 3D face reconstruction and sparse representation. Since some real-world face recognition databases provide only a single image per subject in the gallery, this paper proposes a novel idea to overcome this limitation by building a 3D model from each gallery image and synthesizing it to generate several images. Accordingly, a 3D model is first reconstructed from each frontal face image in the real-world gallery. Each reconstructed 3D face is then synthesized into several possible views, and a sparse dictionary is generated from the synthesized face images of each person. A likeness dictionary is also defined, and its optimization problem is solved by the proposed method. Finally, open-set face recognition is performed using the three proposed representation-based classifications. Promising results are achieved for face recognition across plastic surgery and look-alike faces on three databases, including the plastic surgery face, look-alike face and LFW databases, compared to several state-of-the-art methods. Several real-world and open-set scenarios are also used to evaluate the proposed method on these databases. Highlights: (i) This paper uses 3D reconstructed models to recognize look-alike faces. (ii) A feature is extracted from both the reconstructed facial depth and texture images. (iii) This paper proposes likeness dictionary learning. (iv) Three open-set classification methods are proposed for real-world face recognition.
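The pipeline sketched in the abstract (per-subject dictionaries built from synthesized views, sparse coding of the probe, class-wise residuals, and rejection of unmatched probes) can be illustrated with a generic sparse-representation snippet. The sketch below is a hypothetical SRC-style open-set identifier, not the paper's likeness-dictionary formulation; `gallery_feats`, the Lasso-based l1 solve and the rejection threshold are all assumptions.

```python
# Hypothetical sketch of open-set sparse-representation classification (SRC-style).
# Assumptions: gallery_feats[c] holds column features extracted from the several views
# synthesized from subject c's single 3D-reconstructed gallery image; the l1 solve is
# approximated with sklearn's Lasso. This is NOT the paper's likeness-dictionary method.
import numpy as np
from sklearn.linear_model import Lasso

def build_dictionary(gallery_feats):
    """Stack per-subject feature matrices into one dictionary D and a label map."""
    cols, labels = [], []
    for subject_id, feats in gallery_feats.items():   # feats: (d, n_views)
        cols.append(feats)
        labels.extend([subject_id] * feats.shape[1])
    D = np.hstack(cols)
    D /= (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)  # unit-norm atoms
    return D, np.array(labels)

def open_set_identify(probe, D, labels, alpha=0.01, reject_thresh=0.9):
    """Return a subject id, or None when the probe has no match in the gallery."""
    probe = probe / (np.linalg.norm(probe) + 1e-12)
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, probe).coef_
    best_id, best_res = None, np.inf
    for subject_id in np.unique(labels):
        mask = labels == subject_id
        residual = np.linalg.norm(probe - D[:, mask] @ x[mask])  # class-wise residual
        if residual < best_res:
            best_id, best_res = subject_id, residual
    # Open-set rule: reject probes whose best class-wise residual is still large.
    return best_id if best_res < reject_thresh else None
```

The open-set behaviour comes entirely from the final residual threshold: a probe whose best class-wise residual remains large is reported as unknown rather than forced onto a gallery identity.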
Citations
Journal ArticleDOI
Yang Yang, Chunping Hou, Lang Yue, Guan Dai, Huang Danyang, Xu Jinchen
TL;DR: A model based on a Generative Adversarial Network (GAN) is proposed to address open-set recognition without manual intervention during the training process; the proposed architecture is shown to outperform other variants and to be robust on both datasets.
Abstract: Open-set activity recognition remains a challenging problem because of the complexity and diversity of human activities. In previous works, extensive effort has been devoted to constructing a negative set or setting an optimal threshold for the target set. In this paper, a model based on a Generative Adversarial Network (GAN), called ‘OpenGAN’, is proposed to address open-set recognition without manual intervention during the training process. The generator produces fake target samples, which serve as an automatic negative set, and the discriminator is redesigned to output multiple categories together with an ‘unknown’ class. We evaluate the effectiveness of the proposed method on a measured micro-Doppler radar dataset and the MOtion CAPture (MOCAP) database from Carnegie Mellon University (CMU). The comparison with several state-of-the-art methods indicates that OpenGAN provides a promising open-set solution to human activity recognition even under circumstances with few known classes. Ablation studies are also performed, showing that the proposed architecture outperforms other variants and is robust on both datasets.
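The core idea, generated samples serving as an automatic negative set for a discriminator with an extra "unknown" output, can be illustrated with a small sketch. This is a generic K+1-class discriminator illustration under assumed PyTorch layers and dimensions, not OpenGAN's actual radar/MOCAP architecture.

```python
# Hypothetical sketch of the 'K known classes + 1 unknown' discriminator idea behind
# OpenGAN-style open-set recognition. Dimensions, layers and the MLP generator are
# assumptions; only the labelling of generated samples as the extra class is the point.
import torch
import torch.nn as nn

K, FEAT_DIM, NOISE_DIM = 5, 128, 32   # known classes, feature size, latent size (illustrative)

generator = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(), nn.Linear(256, FEAT_DIM))
discriminator = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, K + 1))
criterion = nn.CrossEntropyLoss()

def discriminator_loss(real_x, real_y):
    """Real samples keep their class labels; generated samples become class K ('unknown')."""
    fake_x = generator(torch.randn(real_x.size(0), NOISE_DIM)).detach()
    fake_y = torch.full((real_x.size(0),), K, dtype=torch.long)
    logits = discriminator(torch.cat([real_x, fake_x]))
    return criterion(logits, torch.cat([real_y, fake_y]))

# At test time, argmax over the K+1 logits; index K is read as "unknown activity".
```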

93 citations

Journal ArticleDOI
TL;DR: A novel method is proposed for Facial Expression Recognition (FER) using dictionary learning to learn both identity and expression dictionaries simultaneously; it not only demonstrates excellent performance, obtaining high accuracy on all four databases, but also outperforms other state-of-the-art approaches.
Abstract: Highlights: (i) A comprehensive feature extraction method is proposed for facial expression recognition. (ii) A sparse dictionary learning approach is proposed for facial expression recognition. (iii) A regression dictionary is proposed for regression-based facial expression classification. (iv) It improves the facial expression recognition rate on the CK+, MMI and JAFFE databases. In this paper, a novel method is proposed for Facial Expression Recognition (FER) using dictionary learning to learn both identity and expression dictionaries simultaneously. Accordingly, an automatic and comprehensive feature extraction method is proposed. The proposed method maps real-valued scores to a probability expressing what percentage of a given Facial Expression (FE) is present in the input image. To this end, a dual dictionary learning method is proposed to learn both regression and feature dictionaries for FER. Then, two regression classification methods are proposed using a regression model formulated based on dictionary learning and two known classification methods, namely Sparse Representation Classification (SRC) and Collaborative Representation Classification (CRC). Convincing results are obtained for FER on the CK+, CK, MMI and JAFFE image databases compared to several state-of-the-art methods. Promising results are also obtained when evaluating the proposed method for generalization on other databases. The proposed method not only demonstrates excellent performance by obtaining high accuracy on all four databases but also outperforms other state-of-the-art approaches.
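Of the two classifiers the abstract mentions, CRC has a particularly compact closed form. The sketch below is generic CRC (l2-regularized collaborative coding with class-wise regularized residuals), not the paper's dual identity/expression dictionary model; variable names and the regularization value are illustrative.

```python
# Minimal Collaborative Representation Classification (CRC) sketch, the l2-regularized
# counterpart of SRC. Generic method only; not the cited paper's dual-dictionary model.
import numpy as np

def crc_classify(probe, D, labels, lam=0.001):
    """D: (d, N) dictionary of training features; labels: (N,) class of each column."""
    # Closed-form collaborative coding: x = (D^T D + lam*I)^-1 D^T y
    P = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T)
    x = P @ probe
    best_cls, best_score = None, np.inf
    for cls in np.unique(labels):
        mask = labels == cls
        # CRC decision rule: class reconstruction error scaled by the code energy.
        score = np.linalg.norm(probe - D[:, mask] @ x[mask]) / (np.linalg.norm(x[mask]) + 1e-12)
        if score < best_score:
            best_cls, best_score = cls, score
    return best_cls
```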

18 citations

Journal ArticleDOI
01 Jun 2017-Optik
TL;DR: The experimental results show that the proposed anti-spoofing technique significantly improves the security of a face recognition system by detecting face liveness; the technique has been successfully evaluated by detecting photo and video spoofing attacks.
Abstract: Face liveness detection is an active research topic due to its significant security applications in various fields, and it is of utmost importance for ascertaining the physical presence of a person. Spoofing is a serious threat to the security of face recognition systems, and it can be mitigated by detecting face liveness. In this paper, a robust anti-spoofing technique for face liveness detection based on morphological operations is proposed, which considers eye-blink and mouth movements to maximize reliability during liveness detection. The ZJU Eyeblink dataset, the Print-Attack Replay dataset and an in-house dataset created at our university have been used for the experiments. The ZJU Eyeblink dataset has been used to capture eye blinks, the Print-Attack Replay dataset has been used to detect photo and video attacks based on eye blinks, while both eye-blink and mouth movements have been detected simultaneously on the in-house dataset. The experimental results show that the proposed anti-spoofing technique significantly improves the security of a face recognition system by detecting face liveness. The efficiency of the proposed technique has been successfully evaluated by detecting photo and video spoofing attacks.
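As a rough illustration of a motion-based liveness cue of the kind described above, the sketch below applies frame differencing on a cropped eye region followed by morphological opening to suppress noise. It is a generic illustration only, not the cited paper's pipeline; the OpenCV calls are standard, but region cropping, thresholds and the blink rule are assumptions.

```python
# Generic motion-based liveness cue: frame differencing + morphological opening on an
# eye-region crop. Thresholds and the blink heuristic are illustrative assumptions.
import cv2
import numpy as np

def eye_motion_score(prev_eye_gray, curr_eye_gray, diff_thresh=25):
    """Fraction of eye-region pixels that changed between consecutive frames."""
    diff = cv2.absdiff(prev_eye_gray, curr_eye_gray)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))  # remove speckle
    return float(mask.mean()) / 255.0

def is_blinking(scores, motion_thresh=0.05):
    """A spike followed by a drop in motion scores is treated as one blink event."""
    return max(scores) > motion_thresh and scores[-1] < motion_thresh
```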

17 citations

Proceedings ArticleDOI
01 Oct 2018
TL;DR: A novel framework is proposed that transfers fundamental visual features learnt from a generic image dataset to supplement a supervised face recognition model; it combines an off-the-shelf supervised classifier with a generic, task-independent network that encodes information related to basic visual cues such as color, shape, and texture.
Abstract: Plastic surgery and disguise variations are two of the most challenging covariates of face recognition. State-of-the-art deep learning models are not sufficiently successful here due to the limited availability of training samples. In this paper, a novel framework is proposed which transfers fundamental visual features learnt from a generic image dataset to supplement a supervised face recognition model. The proposed algorithm combines an off-the-shelf supervised classifier with a generic, task-independent network which encodes information related to basic visual cues such as color, shape, and texture. Experiments are performed on the IIITD plastic surgery face dataset and the Disguised Faces in the Wild (DFW) dataset. The results show that the proposed algorithm achieves state-of-the-art results on both datasets. Specifically, on the DFW database, the proposed algorithm yields over 87% verification accuracy at 1% false accept rate, which is 53.8% better than the baseline results computed using VGG Face.
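The fusion idea, supplementing a supervised face embedding with generic, task-independent visual features, can be sketched as a simple feature concatenation followed by cosine-similarity verification. In the snippet below, `face_embed` and `generic_embed` are placeholders for the two networks, and concatenation plus a fixed threshold is an assumption, not the paper's exact scheme.

```python
# Hedged sketch: fuse a supervised face embedding with generic visual features, then
# verify identity by cosine similarity. Embedding functions and threshold are placeholders.
import numpy as np

def fused_descriptor(image, face_embed, generic_embed):
    f = face_embed(image)        # e.g. an off-the-shelf supervised face CNN (assumed)
    g = generic_embed(image)     # generic colour/shape/texture features (assumed)
    f = f / (np.linalg.norm(f) + 1e-12)
    g = g / (np.linalg.norm(g) + 1e-12)
    return np.concatenate([f, g])

def same_identity(img_a, img_b, face_embed, generic_embed, thresh=0.6):
    a = fused_descriptor(img_a, face_embed, generic_embed)
    b = fused_descriptor(img_b, face_embed, generic_embed)
    cosine = float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return cosine > thresh       # verification decision at a tuned threshold
```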

16 citations


Cites background from "Open-set face recognition across look-alike faces in real-world scenarios"

  • ...[17] developed 3D face reconstruction with sparse and collaborative representations....


References
Proceedings ArticleDOI
20 Jun 2005
TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
Abstract: We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
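A typical way to compute the HOG descriptor described above is via scikit-image, using parameter choices close to the ones the abstract highlights (fine orientation binning, coarse spatial cells, block-wise contrast normalization). The library, test image and exact parameter values are illustrative, not those of the cited paper.

```python
# HOG descriptor extraction with scikit-image (assumed available); parameters illustrative.
from skimage import color, data
from skimage.feature import hog

image = color.rgb2gray(data.astronaut())   # bundled sample image stands in for a detection window
descriptor = hog(
    image,
    orientations=9,              # fine orientation binning
    pixels_per_cell=(8, 8),      # relatively coarse spatial cells
    cells_per_block=(2, 2),      # overlapping blocks for local contrast normalization
    block_norm="L2-Hys",
)
print(descriptor.shape)          # one flat feature vector per window
```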

31,952 citations


"Open-set face recognition across lo..." refers methods in this paper

  • ...Popular approaches for feature extraction are Gabor wavelets [17], the Local Binary Patterns (LBP) operator [18], Local Gabor Binary Pattern (LGBP) [19], Histogram of Oriented Gradient (HOG) [20] and subspace learning methods such as Principal Component Analysis (PCA) [21], Linear Discriminant Analysis (LDA) [22], etc. Commonly, to handle facial appearance changes including facial makeup and plastic surgery in face recognition, local facial regions are used for feature extraction....


Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
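The eigenface projection and nearest-neighbour matching described in this abstract reduce to a short numpy routine. The sketch below is a minimal, self-contained illustration; array shapes, the SVD route to the principal components, and the number of components are illustrative choices, not the original system.

```python
# Minimal eigenfaces sketch: project mean-centred face images onto the top principal
# components and match probes by nearest neighbour in weight space.
import numpy as np

def fit_eigenfaces(train_imgs, n_components=50):
    """train_imgs: (n_samples, n_pixels) matrix of flattened, aligned face images."""
    mean_face = train_imgs.mean(axis=0)
    centred = train_imgs - mean_face
    # Right singular vectors of the centred data are the eigenfaces (principal components).
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    eigenfaces = vt[:n_components]                 # (n_components, n_pixels)
    weights = centred @ eigenfaces.T               # projection of each training face
    return mean_face, eigenfaces, weights

def recognize(probe_img, mean_face, eigenfaces, weights, train_labels):
    w = (probe_img - mean_face) @ eigenfaces.T     # probe's eigenface weights
    dists = np.linalg.norm(weights - w, axis=1)
    return train_labels[int(np.argmin(dists))]     # nearest neighbour in weight space
```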

14,562 citations


"Open-set face recognition across lo..." refers methods in this paper

  • ...Popular approaches for feature extraction are Gabor wavelets [17], the Local Binary Patterns (LBP) operator [18], Local Gabor Binary Pattern (LGBP) [19], Histogram of Oriented Gradient (HOG) [20] and subspace learning methods such as Principal Component Analysis (PCA) [21], Linear Discriminant Analysis (LDA) [22], etc. Commonly, to handle facial appearance changes including facial makeup and plastic surgery in face recognition, local facial regions are used for feature extraction....


  • ...In their methods, several feature extraction methods were used which include PCA, Speeded Up Robust Features (SURF) [23], Local Feature Analysis (LFA) [24], Neural Network Architecture based 2-D Log Polar Gabor Transform (GNN) [25], Circular Local Binary Pattern (CLBP) [26], and Fisher Discriminant Analysis (FDA) [27]....


Journal ArticleDOI
TL;DR: A generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis.
Abstract: Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed "uniform," are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution, and present a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns.
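The uniform, rotation-invariant LBP operator and its occurrence histogram described above map directly onto scikit-image's implementation. The library choice, neighbourhood parameters and test image below are illustrative assumptions.

```python
# Uniform, rotation-invariant LBP texture features with scikit-image (assumed available).
import numpy as np
from skimage import data
from skimage.feature import local_binary_pattern

P, R = 8, 1.0                                     # 8 neighbours on a circle of radius 1
gray = data.camera()                              # bundled grayscale sample image
codes = local_binary_pattern(gray, P, R, method="uniform")
# Occurrence histogram over the P+2 uniform-pattern bins is the texture descriptor.
hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
print(hist)
```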

14,245 citations


"Open-set face recognition across lo..." refers methods in this paper

  • ...The training data is utilized for selecting EUCLBP/SIFT features for each granule, while the unseen testing data is used for performance evaluation....


  • ...In their methods, several feature extraction methods were used which include PCA, Speeded Up Robust Features (SURF) [23], Local Feature Analysis (LFA) [24], Neural Network Architecture based 2-D Log Polar Gabor Transform (GNN) [25], Circular Local Binary Pattern (CLBP) [26], and Fisher Discriminant Analysis (FDA) [27]....


Journal ArticleDOI
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space, if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
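A common practical realization of the Fisherface recipe (dimensionality reduction followed by Fisher's linear discriminant) is a PCA+LDA pipeline. The sketch below uses scikit-learn as an assumed stand-in; component counts and the nearest-neighbour matcher are illustrative, and the original paper derives the projections analytically rather than through this library.

```python
# Fisherfaces-style sketch: PCA to avoid a singular within-class scatter matrix, then LDA.
# scikit-learn assumed; component counts illustrative.
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def fisherfaces_classifier(n_pca=100):
    return make_pipeline(
        PCA(n_components=n_pca),           # reduce dimension first so scatter matrices are well-posed
        LinearDiscriminantAnalysis(),      # project onto the most discriminative Fisher axes
        KNeighborsClassifier(n_neighbors=1),  # nearest neighbour in the Fisher subspace
    )

# Usage (X: flattened, aligned face images; y: subject labels), assuming enough samples:
# clf = fisherfaces_classifier().fit(X_train, y_train)
# predictions = clf.predict(X_test)
```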

11,674 citations

Journal ArticleDOI
TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Abstract: We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
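The ℓ1-regularized coding step at the heart of SRC can be solved with many off-the-shelf optimizers; the sketch below uses a plain ISTA loop in numpy so it is self-contained. It is a generic solver sketch under assumed step size and iteration count, not the cited paper's implementation.

```python
# Plain ISTA for the l1-regularized coding step of SRC:
#   min_x 0.5*||y - D x||_2^2 + lam*||x||_1
# Generic solver sketch; step size and iteration count are illustrative.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam=0.01, n_iter=500):
    """D: (d, N) dictionary with unit-norm columns, y: (d,) probe feature."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x                                # sparse code; class-wise residuals then give the identity
```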

9,658 citations


"Open-set face recognition across lo..." refers background in this paper

  • ...However, most sparse representation methods for face recognition require several images for each subject in the gallery [12-15, 37-39]....
