Book ChapterDOI

Recognizing Altered Facial Appearances Due to Aging and Disguise

01 Jan 2014, pp. 77-106
TL;DR: The proposed mutual information based age transformation algorithm registers the gallery and probe face images and minimizes the variations in facial features caused by aging; experimental results show that the proposed algorithm performs significantly better than existing algorithms.
Abstract: This chapter focuses on recognizing faces with variations in aging and disguise. In the proposed approach, a mutual information based age transformation algorithm registers the gallery and probe face images and minimizes the variations in facial features caused by aging. The gallery and probe face images are then decomposed at different levels of granularity to extract non-disjoint spatial features. At the first level, face granules are generated by applying Gaussian and Laplacian operators to extract features at different resolutions and image properties. The second level of granularity divides the face image into vertical and horizontal regions of different sizes to specifically handle variations in pose and disguise. At the third level of granularity, the face image is partitioned into small grid structures to extract local features. A neural network architecture based 2D log polar Gabor transform is used to extract binary phase information from each of the face granules. Finally, a support vector machine classification approach based on likelihood ratio test statistics is used to classify the granular information. The proposed algorithm is evaluated on multiple databases comprising disguised faces of real people, disguised synthetic face images, faces with aging variations, and disguised faces of actors and actresses from movie clips that also exhibit aging variations. These databases cover a comprehensive set of aging and disguise scenarios. The performance of the proposed algorithm is compared with existing algorithms, and the results show that it is significantly better than that of existing algorithms.
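For illustration, the first level of granularity can be sketched with standard image filters. The SciPy-based snippet below is a minimal sketch assuming grayscale input and illustrative scale values; it is not the chapter's exact implementation.

```python
# A minimal sketch of first-level granule generation: Gaussian (low-pass) and
# Laplacian (band-pass) responses at several scales. Scale values are illustrative.
import numpy as np
from scipy import ndimage

def first_level_granules(face, sigmas=(1.0, 2.0, 4.0)):
    """Return Gaussian and Laplacian granules of a grayscale face image."""
    face = face.astype(np.float64)
    granules = []
    for sigma in sigmas:
        granules.append(ndimage.gaussian_filter(face, sigma=sigma))    # smoothed granule
        granules.append(ndimage.gaussian_laplace(face, sigma=sigma))   # Laplacian granule
    return granules

# Usage: granules = first_level_granules(gray_face)  # gray_face: 2-D numpy array
```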
Citations
Proceedings ArticleDOI
01 Apr 2017
TL;DR: A combination of the Haar wavelet transform and color moments is used to extract highly informative and influential feature elements from the face image and to improve the training step of the age estimation system.
Abstract: Face appearance is one of the most important visual characteristics of humans, and it varies significantly with aging. Automatic age estimation is therefore a demanding research topic in the field of facial feature analysis. In the task of age estimation, feature extraction is the first influential step and strongly affects the learning method and the results it obtains. The second important step of an age estimation system is training a pattern recognition method on the extracted feature vector. Considering the importance of the feature extraction and training steps, this paper combines the Haar wavelet transform and color moments to extract highly informative and influential feature elements from the face image. To improve the training step, the paper trains a Support Vector Regression (SVR) model on the extracted feature vector for age estimation. Experiments on the FG-NET and MORPH datasets demonstrate the superiority of the method over state-of-the-art methods.
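As a rough illustration of the described pipeline, the sketch below computes single-level Haar sub-band statistics and the first three color moments per channel and fits an SVR; the specific features and hyperparameters are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: Haar wavelet statistics plus per-channel color moments as features,
# with Support Vector Regression for age estimation.
import numpy as np
import pywt
from sklearn.svm import SVR

def haar_color_features(rgb_face):
    """Concatenate Haar sub-band statistics with the first three color moments per channel."""
    gray = rgb_face.astype(np.float64).mean(axis=2)
    cA, (cH, cV, cD) = pywt.dwt2(gray, 'haar')               # single-level Haar decomposition
    subbands = (cA, cH, cV, cD)
    wavelet_part = np.array([b.mean() for b in subbands] + [b.std() for b in subbands])
    pixels = rgb_face.reshape(-1, 3).astype(np.float64)
    mean, std = pixels.mean(axis=0), pixels.std(axis=0)
    skew = ((pixels - mean) ** 3).mean(axis=0) / (std ** 3 + 1e-8)    # third color moment
    return np.concatenate([wavelet_part, mean, std, skew])

# faces: list of RGB face arrays; ages: matching ground-truth ages (e.g., FG-NET labels)
# X = np.stack([haar_color_features(f) for f in faces])
# age_model = SVR(kernel='rbf', C=10.0).fit(X, ages)
```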

7 citations


Cites background from "Recognizing Altered Facial Appearan..."

  • ...Human face is one of the major properties that characterize the individuals in terms of gender, emotion, or age [1-4]....


Proceedings ArticleDOI
01 Jan 2016
TL;DR: A new Local Binary Pattern (LBP)-based feature extraction method is combined with a weighting scheme that assigns high weights to LBP feature elements from parts of the facial image without local latency and low weights to feature elements from parts covered by local latency.
Abstract: Age estimation is one of the main problems in pattern recognition; it aims to predict the age of an individual from his or her facial features. Age estimation becomes more difficult when parts of the facial image are covered by local latency such as sunglasses or a scarf. This paper proposes a new facial age estimation method for estimating the age of an individual under local latency. It introduces a new Local Binary Pattern (LBP)-based feature extraction method combined with a weighting scheme that assigns high weights to LBP feature elements from parts of the facial image without local latency and low weights to feature elements from parts covered by local latency. The weighted feature elements are then fed to a Multi-Layer Perceptron (MLP) model for age estimation. Evaluation on three aging datasets (FG-NET, MORPH, and UCI) containing facial images with local latency demonstrates the ability of the proposed method to estimate age even under these conditions.
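The weighting idea can be sketched as follows, assuming a 4x4 block grid, hand-set block weights, and an MLP regressor; these are illustrative choices rather than the paper's exact settings.

```python
# Rough sketch of weighted block-wise LBP features for age estimation: per-block uniform
# LBP histograms are scaled by weights (lower for blocks affected by local latency).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPRegressor

def weighted_lbp_features(gray_face, block_weights, grid=(4, 4), P=8, R=1):
    codes = local_binary_pattern(gray_face, P, R, method='uniform')
    n_bins = P + 2                                    # number of uniform LBP labels
    bh, bw = codes.shape[0] // grid[0], codes.shape[1] // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(block_weights[i, j] * hist)  # down-weight covered blocks
    return np.concatenate(feats)

# weights = np.ones((4, 4)); weights[3, :] = 0.2   # e.g. a scarf covering the lower face
# X = np.stack([weighted_lbp_features(f, weights) for f in gray_faces])
# age_model = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000).fit(X, ages)
```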

4 citations


Cites background or methods from "Recognizing Altered Facial Appearan..."

  • ...I. INTRODUCTION Aging is the process of variation of human body which highly impresses the face of individuals such as face size and skin in which face size becomes gradually larger and face skin becomes more and more wrinkle over the aging process [1-4]....


  • ...To solve the problem, this paper employs the histogram of LBP binary code to show the distribution of textures of face image as the feature vector....


Proceedings ArticleDOI
01 Oct 2017
TL;DR: A new image retrieval method is proposed that takes a facial image and the age of an individual as the query and retrieves the face image, or the most similar face image, of that person at the selected age.
Abstract: Aging causes variations to appear on the human face, which affects the task of retrieving facial images of the same individual at different ages. This paper proposes a new image retrieval method that takes a facial image and the age of an individual as the query and retrieves the face image, or the most similar face image, of that person at the selected age. The proposed method uses Zernike Moments (ZM) for feature extraction and a Multi-Layer Perceptron (MLP) neural network as the learning method. In this approach, aging attributes and orthogonal moment features are combined to enable a new application in the field of face image retrieval. Evaluation on the FG-NET and MORPH datasets indicates the superiority of the proposed method over several state-of-the-art methods.
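A hedged sketch of the retrieval idea follows, with Zernike moment descriptors (via the mahotas library, an assumed choice) and a simple nearest-neighbour rule standing in for the paper's MLP learning stage.

```python
# Sketch only: Zernike moment descriptors compared across gallery faces of the requested age.
import numpy as np
import mahotas

def zernike_descriptor(gray_face, radius=64, degree=8):
    """Magnitudes of Zernike moments of a grayscale face image."""
    return mahotas.features.zernike_moments(gray_face, radius, degree=degree)

def retrieve(query_face, target_age, gallery):
    """gallery: iterable of (face_image, age, identity); return the identity of the closest
    gallery face labeled with the requested age, or None if no candidate exists."""
    q = zernike_descriptor(query_face)
    candidates = [(np.linalg.norm(q - zernike_descriptor(img)), ident)
                  for img, age, ident in gallery if age == target_age]
    return min(candidates, key=lambda c: c[0])[1] if candidates else None
```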

3 citations


Cites background from "Recognizing Altered Facial Appearan..."

  • ...Aging is known as appearance of some variations of human face along with age progression, where several wrinkles become more visible and the face size gets bigger [1-4]....


References
Journal ArticleDOI
TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Abstract: We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims.
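The core of this sparse-representation classifier (SRC) can be sketched compactly; in the snippet below, sklearn's Lasso stands in for the exact ℓ1-minimization solver, and the input matrix layout is an assumption.

```python
# Compact SRC sketch: represent the test image as a sparse combination of training images,
# then assign the class with the smallest class-wise reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, alpha=0.01):
    """A: (d, n) matrix whose columns are normalized training images; labels: length-n class
    labels; y: length-d test image. Returns the predicted class label."""
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, y).coef_
    residuals = {}
    for c in set(labels):
        keep = np.array([l == c for l in labels])
        x_c = np.where(keep, x, 0.0)            # keep coefficients of class c only
        residuals[c] = np.linalg.norm(y - A @ x_c)
    return min(residuals, key=residuals.get)
```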

9,658 citations

Journal ArticleDOI
TL;DR: This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features that is assessed in the face recognition problem under different challenges.
Abstract: This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed.
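Such block-wise LBP descriptors are commonly matched by histogram comparison; the sketch below uses an unweighted chi-square distance with a nearest-neighbour decision, a simplification of the cited representation.

```python
# Minimal matching sketch: concatenated regional LBP histograms compared by chi-square
# distance, identity assigned by nearest neighbour. Descriptor extraction is assumed
# to be done elsewhere.
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histogram-based face descriptors."""
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

def identify(probe_descriptor, gallery_descriptors, gallery_labels):
    """Return the label of the gallery descriptor closest to the probe."""
    distances = [chi_square(probe_descriptor, g) for g in gallery_descriptors]
    return gallery_labels[int(np.argmin(distances))]
```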

5,563 citations

Journal ArticleDOI
TL;DR: The results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps, which makes this method very well suited for clinical applications.
Abstract: A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, mutual information (MI), or relative entropy, as a new matching criterion. The method presented in this paper applies MI to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of MI is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the MI criterion is validated for rigid body registration of computed tomography (CT), magnetic resonance (MR), and positron emission tomography (PET) images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction, or other preprocessing steps, which makes this method very well suited for clinical applications.
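The matching criterion itself can be stated in a few lines; the sketch below computes mutual information from the joint intensity histogram of two overlapping images, with the bin count as an illustrative parameter. A full registration would wrap this measure in a search over geometric transformations.

```python
# Mutual information from the joint intensity histogram of two equally shaped images.
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in nats) between two intensity images of the same shape."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image B
    nonzero = pxy > 0                            # avoid log(0)
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```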

4,773 citations

Journal ArticleDOI
TL;DR: An overview is presented of the medical image processing literature on mutual-information-based registration, an introduction for those new to the field, an overview for those working in the field and a reference for those searching for literature on a specific application.
Abstract: An overview is presented of the medical image processing literature on mutual-information-based registration. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. Methods are classified according to the different aspects of mutual-information-based registration. The main division is in aspects of the methodology and of the application. The part on methodology describes choices made on facets such as preprocessing of images, gray value interpolation, optimization, adaptations to the mutual information measure, and different types of geometrical transformations. The part on applications is a reference of the literature available on different modalities, on interpatient registration and on different anatomical objects. Comparison studies including mutual information are also considered. The paper starts with a description of entropy and mutual information and it closes with a discussion on past achievements and some future challenges.

3,121 citations

Proceedings Article
03 Dec 1996
TL;DR: This presentation reports results of applying the Support Vector method to problems of estimating regressions, constructing multidimensional splines, and solving linear operator equations.
Abstract: The Support Vector (SV) method was recently proposed for estimating regressions, constructing multidimensional splines, and solving linear operator equations [Vapnik, 1995]. In this presentation we report results of applying the SV method to these problems.
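As a minimal usage illustration (not taken from the paper), the snippet below fits an ε-Support Vector Regression to a noisy sine curve; the kernel and the C and ε values are arbitrary.

```python
# Toy epsilon-SVR example: fit a noisy sine curve and report the training error.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

svr = SVR(kernel='rbf', C=10.0, epsilon=0.1).fit(X, y)   # epsilon-insensitive tube of width 0.1
print('mean absolute error:', float(np.abs(svr.predict(X) - y).mean()))
```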

2,632 citations