
Showing papers on "Three-dimensional face recognition" published in 2021


Journal ArticleDOI
TL;DR: Wang et al. proposed the Embedding Unmasking Model (EUM), which operates on top of existing face recognition models and is trained to produce embeddings similar to those of unmasked faces of the same identities.
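The sketch below illustrates the general idea of an embedding-unmasking module: a small fully connected network applied to the embedding of a masked face and trained to move it toward the unmasked embedding of the same identity. The layer sizes and the simple MSE objective are assumptions for illustration, not the authors' exact design.

```python
# Minimal sketch of an embedding-unmasking module (hypothetical layer sizes),
# assuming the EUM is a small fully connected network that maps the embedding
# of a masked face toward the embedding of the same identity without a mask.
import torch
import torch.nn as nn

class EmbeddingUnmaskingModel(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, masked_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(masked_embedding)

# Illustrative training objective: pull the transformed masked embedding
# toward the unmasked embedding of the same identity (the paper's actual
# loss may differ; this is only a stand-in).
eum = EmbeddingUnmaskingModel()
masked = torch.randn(8, 512)    # embeddings of masked faces (frozen recognizer)
unmasked = torch.randn(8, 512)  # embeddings of the same identities, unmasked
loss = nn.functional.mse_loss(eum(masked), unmasked)
loss.backward()
```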

38 citations


Journal ArticleDOI
TL;DR: In this article, a Multi-Degradation Face Restoration (MDFR) model is proposed to restore frontalized, high-quality faces from given low-quality ones under arbitrary facial poses, with three distinct novelties.
Abstract: In real-world scenarios, many factors may harm face recognition performance, e.g., large pose, bad illumination, low resolution, blur and noise. To address these challenges, previous efforts usually first restore the low-quality faces to high-quality ones and then perform face recognition. However, most of these methods are stage-wise, which is sub-optimal and deviates from reality. In this paper, we address all these challenges jointly for unconstrained face recognition. We propose a Multi-Degradation Face Restoration (MDFR) model to restore frontalized high-quality faces from given low-quality ones under arbitrary facial poses, with three distinct novelties. First, MDFR is a well-designed encoder-decoder architecture that extracts feature representations from an input face image with arbitrary low-quality factors and restores it to a high-quality counterpart. Second, MDFR introduces a pose residual learning strategy along with a 3D-based Pose Normalization Module (PNM), which perceives the gap between the initial pose of the input and its real frontal pose to guide face frontalization. Finally, MDFR can generate frontalized high-quality face images with a single unified network, showing a strong capability of preserving face identity. Qualitative and quantitative experiments on both controlled and in-the-wild benchmarks demonstrate the superiority of MDFR over state-of-the-art methods on both face frontalization and face restoration.
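As a rough illustration of the single-network restore-and-frontalize idea described above, the following sketch couples an encoder-decoder with a pose code injected at the bottleneck. The module names, tensor shapes and the way the pose residual is fused are assumptions for illustration, not the MDFR implementation.

```python
# Illustrative sketch of a restore-and-frontalize network: an encoder-decoder
# maps a degraded face to a high-quality one, with a pose code guiding
# frontalization. Shapes and fusion scheme are assumptions, not MDFR's design.
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        # A pose code (e.g. the residual between the input pose and the frontal
        # pose, as a pose-normalization module might provide) is injected here.
        self.pose_fc = nn.Linear(3, 128)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, degraded: torch.Tensor, pose_residual: torch.Tensor):
        feat = self.encoder(degraded)
        pose = self.pose_fc(pose_residual).unsqueeze(-1).unsqueeze(-1)
        return self.decoder(feat + pose)  # restored, frontalized face

model = EncoderDecoder()
out = model(torch.randn(2, 3, 128, 128), torch.randn(2, 3))  # -> (2, 3, 128, 128)
```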

29 citations


Journal ArticleDOI
TL;DR: This paper presents an efficient 3D face recognition approach based on Geodesic Distance (GD) of Riemannian geometry and Random Forest (RF), named GD-FM+RF, which enhances the recognition rate and achieves promising results compared to state of the art methods.
Abstract: 3D face recognition is increasingly adopted as a biometric modality owing to its non-intrusive acquisition. Despite extensive research, 2D face recognition suffers from low recognition rates under illumination variations, pose changes, poor image quality, occlusions and facial expression variations, whereas 3D face models are largely insensitive to these conditions. In this paper, we present an efficient 3D face recognition approach based on the Geodesic Distance (GD) of Riemannian geometry and Random Forests (RF), named GD-FM+RF. To compute the geodesic distance between specified pairs of points on the 3D faces, we apply the Fast Marching (FM) algorithm, which solves the Eikonal equation. The extracted features, represented by geodesic facial curves, are then processed with Principal Component Analysis (PCA) to analyze class separability, and the resulting features are fed to an RF classifier. To assess the effectiveness of our approach, a series of tests was carried out on the 3D SHape REtrieval Contest 2008 database (SHREC'08). The proposed approach improves the recognition rate and achieves promising results compared to state-of-the-art methods, reaching a recognition rate of 99.11%.
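The pipeline above amounts to: solve the Eikonal equation |∇T(x)|·F(x) = 1 by fast marching to obtain geodesic arrival times from a reference point, reduce the resulting features with PCA, and classify with a Random Forest. The sketch below follows that outline on a depth-map grid using scikit-fmm and scikit-learn; the data shapes, seed point and speed function are assumptions for illustration, not the GD-FM+RF implementation.

```python
# Illustrative pipeline: fast-marching geodesic distances -> PCA -> Random
# Forest. scikit-fmm on a regular depth-map grid stands in for fast marching
# on the 3D face surface; shapes and labels are placeholders.
import numpy as np
import skfmm                              # pip install scikit-fmm
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def geodesic_features(depth_map: np.ndarray, seed: tuple) -> np.ndarray:
    """Solve the Eikonal equation |grad T| = 1/F from a seed point (e.g. the
    nose tip) by fast marching and return the arrival-time map as features."""
    phi = np.ones_like(depth_map)
    phi[seed] = -1.0                      # zero contour at the seed point
    gy, gx = np.gradient(depth_map)
    speed = 1.0 / np.sqrt(1.0 + gx**2 + gy**2)   # slower where the surface is steeper
    travel_time = skfmm.travel_time(phi, speed)
    return np.asarray(travel_time).ravel()

# Hypothetical data: one depth map per 3D face scan, labelled by identity.
faces = [np.random.rand(64, 64) for _ in range(20)]
labels = np.repeat(np.arange(10), 2)

X = np.stack([geodesic_features(f, seed=(32, 32)) for f in faces])
X = PCA(n_components=10).fit_transform(X)        # analyze class separability
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
print(clf.predict(X[:2]))
```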

13 citations


Posted Content
TL;DR: In this article, a modified version of the U-NET segmentation network is used to reconstruct the manipulation applied by social media filters, which is shown to improve face detection and recognition accuracy.
Abstract: Beautification and augmented reality filters are very popular in applications that use selfie images captured with smartphones or personal devices. However, they can distort or modify biometric features, severely affecting the capability of recognizing individuals' identity or even detecting the face. Accordingly, we address the effect of such filters on the accuracy of automated face detection and recognition. The social media image filters studied either modify the image contrast or illumination, or occlude parts of the face with, for example, artificial glasses or animal noses. We observe that the effect of some of these filters is harmful both to face detection and identity recognition, especially if they obfuscate the eyes or (to a lesser extent) the nose. To counteract this effect, we develop a method to reconstruct the applied manipulation with a modified version of the U-NET segmentation network, which is observed to contribute to better face detection and recognition accuracy. From a recognition perspective, we employ distance measures and trained machine learning algorithms applied to features extracted with a ResNet-34 network trained to recognize faces. We also evaluate whether incorporating filtered images into the training set of the machine learning approaches is beneficial for identity recognition. Our results show good recognition when filters do not occlude important landmarks, especially the eyes (identification accuracy above 99%), with accuracy above 92% and an EER below 12% for the majority of the perturbations evaluated.
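On the recognition side, the comparison boils down to embedding both the original and the filtered selfie with a ResNet-34 backbone and measuring the distance between the embeddings. The sketch below uses torchvision's ImageNet-trained ResNet-34 as a stand-in for the face-trained ResNet-34 used in the paper; the preprocessing values are torchvision defaults and the file names are placeholders.

```python
# Hedged sketch of the recognition step: embed two images with a ResNet-34
# backbone and compare them with a cosine distance. The ImageNet weights are
# a stand-in for a face-trained ResNet-34; file names are placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

backbone = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()        # keep the 512-d penultimate features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    return backbone(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))

# Cosine distance between an original selfie and its filtered version.
d = 1 - F.cosine_similarity(embed("original.jpg"), embed("filtered.jpg"))
print(float(d))
```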

3 citations


Journal ArticleDOI
TL;DR: This paper presents an expression-invariant 3D face recognition method based on transfer learning and Siamese networks that can resolve the small sample size issue and can be used for facial recognition with a single sample.
Abstract: Three-dimensional (3D) face recognition has become a trending research direction in both industry and academia. However, traditional facial recognition methods carry high computational and data storage costs. Deep learning has led to a significant improvement in the recognition rate, but small sample sizes represent a new problem. In this paper, we present an expression-invariant 3D face recognition method based on transfer learning and Siamese networks that can resolve the small-sample-size issue. First, a landmark detection method utilizing the shape index was employed for facial alignment. Then, a convolutional neural network (CNN) was constructed with transfer learning and trained with the aligned 3D facial data, enabling it to recognize faces regardless of facial expression. Following that, the weights of the trained CNN were shared by a Siamese network to build a 3D facial recognition model that can identify faces even with a small sample size. Our experimental results showed that the proposed method reached a recognition rate of 0.977 on the FRGC database, and the network can be used for facial recognition with a single sample.
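A minimal sketch of the Siamese arrangement described above: one backbone, with its weights shared between both branches, embeds two aligned 3D face inputs, and the distance between the embeddings decides whether they belong to the same person. The backbone, input shape and decision threshold below are placeholders, not the transfer-learned network from the paper.

```python
# Minimal Siamese sketch: a shared backbone embeds two inputs and their
# distance is thresholded. Backbone, input size and threshold are placeholders.
import torch
import torch.nn as nn

class SiameseNet(nn.Module):
    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone           # same weights used for both branches

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        ea, eb = self.backbone(a), self.backbone(b)
        return torch.norm(ea - eb, dim=1)  # small distance -> same identity

backbone = nn.Sequential(                  # stand-in for the transfer-learned CNN
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(16 * 8 * 8, 128),
)
net = SiameseNet(backbone)
probe, gallery = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
same_person = net(probe, gallery) < 0.5    # threshold is a placeholder
```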

1 citation