Book Chapter DOI

A robust face recognition through statistical learning of local features

13 Nov 2011, pp. 335-341
TL;DR: This work develops a face recognition method that is robust to local variations through statistical learning of local features, and shows that the proposed method is more robust to local variations than conventional methods using statistical features or local features.
Abstract: Among the various signals that can be obtained from humans, the facial image is one of the hottest topics in the field of pattern recognition and machine learning due to its diverse variations. In order to deal with variations such as illumination, expression, pose, and occlusion, it is important to find a discriminative feature that can preserve the core information of the original images while remaining robust to the undesirable variations. In the present work, we try to develop a face recognition method which is robust to local variations through statistical learning of local features. Like conventional local approaches, the proposed method represents an image as a set of local feature descriptors. The local feature descriptors are then treated as random samples, and we estimate the probability density of the local features representing each local area of the facial images. In the classification stage, the estimated probability density is used to define a weighted distance measure between two images. Through computational experiments on benchmark data sets, we show that the proposed method is more robust to local variations than conventional methods using statistical features or local features.
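To make the weighted-distance idea concrete, here is a minimal sketch, assuming each face is split into fixed blocks with one descriptor per block and a single spherical Gaussian (identity covariance) fitted per block; the function names and the exact weighting scheme are illustrative, not the authors' formulation.

```python
import numpy as np

def fit_block_densities(train_descriptors):
    """train_descriptors: array of shape (n_images, n_blocks, dim).
    Returns the per-block mean; an identity covariance is assumed,
    i.e. one spherical Gaussian per block."""
    return train_descriptors.mean(axis=0)               # (n_blocks, dim)

def block_likelihood(descriptor, mean):
    """Unnormalized spherical-Gaussian likelihood of one block descriptor."""
    return np.exp(-0.5 * np.sum((descriptor - mean) ** 2))

def weighted_distance(desc_a, desc_b, means):
    """Distance between two images, each given as a (n_blocks, dim) array.
    Blocks whose descriptors are unlikely under the learned density receive
    a smaller weight, so local distortions (e.g. occlusions) contribute less."""
    dists = np.linalg.norm(desc_a - desc_b, axis=1)      # per-block L2 distance
    weights = np.array([min(block_likelihood(a, m), block_likelihood(b, m))
                        for a, b, m in zip(desc_a, desc_b, means)])
    weights = weights / (weights.sum() + 1e-12)
    return float(np.sum(weights * dists))
```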
Citations
Posted Content
TL;DR: In this paper, a review of face detection under occlusion, a preliminary step in face recognition, is presented and the authors analyze the motivations, innovations, pros and cons, and the performance of representative approaches.
Abstract: The limited capacity to recognize faces under occlusions is a long-standing problem that presents a unique challenge for face recognition systems and even for humans. The problem regarding occlusion is less covered by research when compared to other challenges such as pose variation, different expressions, etc. Nevertheless, occluded face recognition is imperative to exploit the full potential of face recognition for real-world applications. In this paper, we restrict the scope to occluded face recognition. First, we explore what the occlusion problem is and what inherent difficulties can arise. As a part of this review, we introduce face detection under occlusion, a preliminary step in face recognition. Second, we present how existing face recognition methods cope with the occlusion problem and classify them into three categories, which are 1) occlusion robust feature extraction approaches, 2) occlusion aware face recognition approaches, and 3) occlusion recovery based face recognition approaches. Furthermore, we analyze the motivations, innovations, pros and cons, and the performance of representative approaches for comparison. Finally, future challenges and method trends of occluded face recognition are thoroughly discussed.

54 citations

Journal Article DOI
TL;DR: A novel approach to face recognition that simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data; the new method is shown to perform competitively even when the training images are corrupted.
Abstract: In this paper, we introduce a novel approach to face recognition which simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and finally face recognition, based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local area-based approaches. Robustness is achieved with novel approaches for feature extraction, LMA-based face image comparison and unseen data modeling. On the extended YaleB and AR face databases for face identification, our method using only a single training image per person, outperforms other methods using a single training image, and matches or exceeds methods which require multiple training images. On the labeled faces in the wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.
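The abstract does not spell out the matching procedure, but the largest-matching-area (LMA) idea can be illustrated roughly as below; this is only an illustrative sketch, and the patch sizes, difference measure, and threshold are assumptions rather than the authors' algorithm.

```python
import numpy as np

def largest_matching_area(img_a, img_b, y, x, sizes=(4, 8, 16, 32), tau=0.15):
    """At a given grid point (y, x), try successively larger square patches
    and keep the size of the largest one whose mean absolute difference to
    the corresponding patch in the other image stays below the threshold."""
    best = 0
    for s in sizes:
        pa = img_a[y:y + s, x:x + s].astype(float)
        pb = img_b[y:y + s, x:x + s].astype(float)
        if pa.shape != (s, s) or pb.shape != (s, s):
            break                                   # patch runs off the image
        diff = np.mean(np.abs(pa - pb)) / 255.0     # normalized difference
        if diff < tau:
            best = s                                # a larger area still matches
        else:
            break
    return best
```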

50 citations


Cites methods from "A robust face recognition through s..."

  • ...A similar approach is taken by [34] and [35], where the training examples are used to learn statistics for the appearances of each local face area....

    [...]

Journal Article DOI
TL;DR: This paper introduces face detection under occlusion, a preliminary step in face recognition, presents how existing face recognition methods cope with the occlusion problem, and classifies them into three categories.
Abstract: The limited capacity to recognize faces under occlusions is a long-standing problem that presents a unique challenge for face recognition systems and even for humans. The problem regarding occlusion is less covered by research when compared to other challenges such as pose variation, different expressions, etc. Nevertheless, occluded face recognition is imperative to exploit the full potential of face recognition for real-world applications. In this paper, we restrict the scope to occluded face recognition. First, we explore what the occlusion problem is and what inherent difficulties can arise. As a part of this review, we introduce face detection under occlusion, a preliminary step in face recognition. Second, we present how existing face recognition methods cope with the occlusion problem and classify them into three categories, which are 1) occlusion robust feature extraction approaches, 2) occlusion aware face recognition approaches, and 3) occlusion recovery based face recognition approaches. Furthermore, we analyze the motivations, innovations, pros and cons, and the performance of representative approaches for comparison. Finally, future challenges and method trends of occluded face recognition are thoroughly discussed.

41 citations


Cites methods from "A robust face recognition through s..."

  • ...[124] Sunglasses&Scarf S-TR-3IPS 100 100 89....

    [...]

  • ...Learning based features [11], [27], [31]–[33], [53], [65], [73], [74], [90], [93], [96], [107], [109], [124], [125], [133], [134], [136], [148], [153], [162], [163], [169], [187]...

    [...]

  • ...In paper [124], a face recognition method is proposed...

    [...]

  • ...Feature Extraction [1], [2], [11], [23], [27], [32], [33], [48], [49], [53], [60], [62], [65], [74], [80], [88], [93], [108], [109], [124], [125], [128], [134], [143], [148], [153], [162], [163], [169], [180], [187], [191], [193]...

    [...]

Journal Article DOI
TL;DR: This paper proposes a method that takes advantage of the combination of deep learning and Local Binary Pattern (LBP) features to recognize masked faces, utilizing RetinaFace, a joint extra-supervised and self-supervised multi-task learning face detector that can deal with various scales of faces, as a fast yet effective encoder.
Abstract: Face recognition is one of the most common biometric authentication methods due to its feasibility and convenience. Recently, the COVID-19 pandemic has spread dramatically throughout the world, leading to serious negative impacts on people's health and the economy. Wearing masks in public settings is an effective way to prevent viruses from spreading. However, masked face recognition is a highly challenging task due to the lack of facial feature information. In this paper, we propose a method that takes advantage of the combination of deep learning and Local Binary Pattern (LBP) features to recognize the masked face by utilizing RetinaFace, a joint extra-supervised and self-supervised multi-task learning face detector that can deal with various scales of faces, as a fast yet effective encoder. In addition, we extract local binary pattern features from the eye, forehead, and eyebrow areas of the masked face and combine them with the features learnt from RetinaFace into a unified framework for recognizing masked faces. We also collected a dataset named COMASK20 from 300 subjects at our institution. In the experiments, we compared our proposed system with several state-of-the-art face recognition methods on the published Essex dataset and our self-collected COMASK20 dataset. With recognition results of an 87% F1-score on the COMASK20 dataset and a 98% F1-score on the Essex dataset, we demonstrate that our proposed system outperforms Dlib and InsightFace, which shows the effectiveness and suitability of the proposed method. The COMASK20 dataset is available at https://github.com/tuminguyen/COMASK20 for research purposes.
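As a rough illustration of the feature fusion described above, the sketch below computes LBP histograms over the visible upper-face regions and concatenates them with a deep embedding vector; the region boxes and the `deep_embedding` placeholder are assumptions, and the paper obtains its deep features via a RetinaFace-based encoder rather than the generic placeholder used here.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(region, P=8, R=1):
    """Uniform-LBP histogram of one grayscale region."""
    codes = local_binary_pattern(region, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def fuse_features(face_gray, deep_embedding, regions):
    """face_gray: 2-D grayscale face image.
    deep_embedding: 1-D feature vector from any deep encoder (placeholder).
    regions: list of (y0, y1, x0, x1) boxes over the eye, forehead and
    eyebrow areas. Returns the concatenated feature vector."""
    lbp_parts = [lbp_histogram(face_gray[y0:y1, x0:x1])
                 for (y0, y1, x0, x1) in regions]
    return np.concatenate([deep_embedding] + lbp_parts)
```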

27 citations

Journal Article DOI
TL;DR: This paper attempts to develop a face recognition method that is robust to partial variations through statistical learning of local features by representing a facial image as a set of local feature descriptors such as scale-invariant feature transform (SIFT).

23 citations


Cites background or methods from "A robust face recognition through s..."

  • ...This technique can also be considered as a block-based estimation approach proposed in the previous work [16], in which a single Gaussian pdf p_m(κ) = G(κ | μ_m, I) is separately defined for each mth block....

    [...]

  • ...This kind of combination of SIFT features and its statistical learning was primarily tried in [16], and the present study is an extension of this work to develop a more general framework for the estimation of the probability density and the utilization of the estimated values in obtaining distance measures....

    [...]

  • ...While these related works [3,14,15] mainly focus on partial occlusion and involve a specifically designed module of detecting and excluding occlusions, our previous work [16] suggests a similarity measure that combines the distance between features and the weights of features corresponding to the appearance of abnormal partial distortions....

    [...]

  • ...In the case of D_L1 and D_L2^mul, we utilized the location information in order to estimate each pdf p_m(κ) for each block by the method used in [16]....

    [...]

  • ...In the next section, experimental results using various distance measures including [16,21] are shown....

    [...]

References
Journal Article DOI
TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Abstract: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.
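For reference, a minimal sketch of the descriptor-matching stage described here, using OpenCV's SIFT implementation and Lowe's ratio test; the Hough-transform clustering and least-squares pose verification steps of the full recognition pipeline are omitted.

```python
import cv2

def sift_matches(img_a, img_b, ratio=0.75):
    """Return ratio-test-filtered SIFT matches between two grayscale images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher()                    # L2 norm, suitable for SIFT
    knn = matcher.knnMatch(des_a, des_b, k=2)
    # Keep a match only if it is clearly better than the second-best candidate.
    return [m for m, n in knn if m.distance < ratio * n.distance]
```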

46,906 citations


"A robust face recognition through s..." refers background or methods in this paper

  • ...SIFT [4] uses scale-space Difference-Of-Gaussian (DOG) to detect keypoints in images....

    [...]

  • ...In addition, some local features such as SIFT are originally designed to have robustness to image variations such as scale and translations[4]....

    [...]

Journal Article DOI
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
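A compact sketch of the eigenface idea described above, assuming aligned faces flattened into vectors; the SVD-based computation and nearest-neighbor comparison of projection weights are a standard rendering of the method, not the authors' exact system.

```python
import numpy as np

def train_eigenfaces(train_faces, n_components=50):
    """train_faces: (n_samples, n_pixels) array of flattened, aligned faces.
    Returns the mean face, the eigenfaces, and the gallery projection weights."""
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # Rows of Vt are the principal components ("eigenfaces") of the face set.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:n_components]
    weights = centered @ eigenfaces.T            # (n_samples, n_components)
    return mean, eigenfaces, weights

def identify(probe, mean, eigenfaces, gallery_weights):
    """Project a probe face and return the index of the closest gallery face."""
    w = (probe - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(gallery_weights - w, axis=1)))
```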

14,562 citations


"A robust face recognition through s..." refers methods in this paper

  • ...Statistical feature extraction methods such as PCA and LDA[2,3] can give efficient low dimensional features through learning the variational properties of...

    [...]

  • ...We compare the proposed method with the conventional local approaches[6] and the conventional statistical methods[2,3]....

    [...]

Journal Article DOI
TL;DR: In this paper, the authors provide an up-to-date critical survey of still- and video-based face recognition research, and provide some insights into the studies of machine recognition of faces.
Abstract: As one of the most successful applications of image analysis and understanding, face recognition has recently received significant attention, especially during the past several years. At least two reasons account for this trend: the first is the wide range of commercial and law enforcement applications, and the second is the availability of feasible technologies after 30 years of research. Even though current machine recognition systems have reached a certain level of maturity, their success is limited by the conditions imposed by many real applications. For example, recognition of face images acquired in an outdoor environment with changes in illumination and/or pose remains a largely unsolved problem. In other words, current systems are still far away from the capability of the human perception system.This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, relevant topics such as psychophysical studies, system evaluation, and issues of illumination and pose variation are covered.

6,384 citations

01 Jan 1998

3,650 citations

Proceedings Article DOI
25 Oct 2010
TL;DR: VLFeat is an open and portable library of computer vision algorithms that includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization.
Abstract: VLFeat is an open and portable library of computer vision algorithms. It aims at facilitating fast prototyping and reproducible research for computer vision scientists and students. It includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization. The source code and interfaces are fully documented. The library integrates directly with MATLAB, a popular language for computer vision research.

3,417 citations