Proceedings ArticleDOI

Face illumination normalization on large and small scale features

TL;DR: It is argued that the large-scale features of a face image are important and contain useful information both for face recognition and for the visual quality of the normalized image, and a novel framework for face illumination normalization is proposed.
Abstract: It is well known that the effect of illumination falls mainly on the large-scale features (low-frequency components) of a face image. In solving the illumination problem for face recognition, most (if not all) existing methods either use only the extracted small-scale features and discard the large-scale features, or perform normalization on the whole image. In the latter case, small-scale features may be distorted when the large-scale features are modified. In this paper, we argue that the large-scale features of a face image are important and contain useful information both for face recognition and for the visual quality of the normalized image. Moreover, this paper suggests that illumination normalization should be performed mainly on the large-scale features of a face image rather than on the whole image. Along this line, a novel framework for face illumination normalization is proposed. In this framework, a single face image is first decomposed into large- and small-scale feature images using the logarithmic total variation (LTV) model. After that, illumination normalization is performed on the large-scale feature image while the small-scale feature image is smoothed. Finally, a normalized face image is generated by combining the normalized large-scale feature image and the smoothed small-scale feature image. The CMU PIE and (Extended) Yale B face databases with different illumination variations are used for evaluation, and the experimental results show that the proposed method outperforms existing methods.
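The decompose–normalize–recombine pipeline in the abstract can be sketched in a few lines. This is a loose illustration, not the paper's method: a Gaussian low-pass in the log domain stands in for the LTV decomposition, and the large-scale "normalization" is reduced to flattening the illumination field toward its mean. The function name and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalize_illumination(face, sigma=8.0, eps=1e-6):
    """Sketch of the framework: log transform, split into large-/small-scale
    parts, normalize the large-scale part, lightly smooth the small-scale
    part, and recombine. Gaussian blur stands in for the LTV model."""
    log_face = np.log(face.astype(np.float64) + eps)
    large = gaussian_filter(log_face, sigma)        # large-scale features
    small = log_face - large                        # small-scale features
    large_norm = np.full_like(large, large.mean())  # crude flattening of lighting
    small_smooth = gaussian_filter(small, 0.5)      # minor smoothing
    out = np.exp(large_norm + small_smooth)
    return (out - out.min()) / (out.max() - out.min() + eps)
```

Because only the large-scale (illumination) part is altered, the small-scale facial detail survives the normalization largely intact, which is the point the abstract argues.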
Citations
Journal ArticleDOI
TL;DR: It is argued that both large- and small-scale features of a face image are important for face restoration and recognition, and it is suggested that illumination normalization should be performed mainly on the large-scale features rather than on the original face image.
Abstract: A face image can be represented by a combination of large- and small-scale features. It is well known that variations in illumination mainly affect the large-scale features (low-frequency components), and not so much the small-scale features. Therefore, in relevant existing methods only the small-scale features are extracted as illumination-invariant features for face recognition, while the large-scale intrinsic features are always ignored. In this paper, we argue that both large- and small-scale features of a face image are important for face restoration and recognition. Moreover, we suggest that illumination normalization should be performed mainly on the large-scale features of a face image rather than on the original face image. A novel method of normalizing both the Small- and Large-scale (S&L) features of a face image is proposed. In this method, a single face image is first decomposed into large- and small-scale features. After that, illumination normalization is mainly performed on the large-scale features, and only a minor correction is made on the small-scale features. Finally, a normalized face image is generated by combining the processed large- and small-scale features. In addition, an optional visual compensation step is suggested for improving the visual quality of the normalized image. Experiments on the CMU-PIE, Extended Yale B, and FRGC 2.0 face databases show that the proposed method yields significantly better recognition performance and visual results than related state-of-the-art methods.

143 citations


Cites result from "Face illumination normalization on ..."

  • ...Our preliminary work was first reported in [47]....


Journal ArticleDOI
TL;DR: This paper proposes orientated local histogram equalization (OLHE), which compensates illumination while encoding rich information on edge orientations, and argues that edge orientation is useful for face recognition.
Abstract: Illumination compensation and normalization play a crucial role in face recognition. Existing algorithms either compensated low-frequency illumination or captured high-frequency edges; however, the orientations of edges were not well exploited. In this paper, we propose orientated local histogram equalization (OLHE), which compensates illumination while encoding rich information on edge orientations. We claim that edge orientation is useful for face recognition. Three OLHE feature combination schemes are proposed for face recognition: 1) one that encodes most edge orientations; 2) one that is more compact with good edge-preserving capability; and 3) one that performs exceptionally well under extreme lighting conditions. The proposed algorithm yielded state-of-the-art performance on AR, CMU PIE, and Extended Yale B using standard protocols. We further evaluated the average performance of the proposed algorithm on differently lighted images, and it yielded promising results.
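As a rough illustration of the orientation idea (not the published OLHE algorithm), the sketch below builds directional edge-response maps with `np.gradient` and rank-equalizes each map globally; OLHE proper equalizes local histograms per orientation. All names here are illustrative.

```python
import numpy as np

def oriented_edge_maps(img, n_orient=4):
    """Build n_orient directional edge responses and histogram-equalize
    each one (globally, via ranks) to the range [0, 1]."""
    gy, gx = np.gradient(img.astype(np.float64))
    maps = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        # directional derivative: gradient projected onto angle theta
        resp = gx * np.cos(theta) + gy * np.sin(theta)
        flat = resp.ravel()
        ranks = flat.argsort().argsort()          # rank transform == equalization
        maps.append(ranks.reshape(resp.shape) / (flat.size - 1))
    return np.stack(maps)
```

Stacking the per-orientation maps preserves which direction an edge runs in, which a single equalized edge map would discard.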

86 citations


Additional excerpts

  • ...The proposed OLHE outperformed all the algorithms in comparison....


Journal ArticleDOI
TL;DR: A novel adaptive region-based image preprocessing scheme that enhances face images and facilitates the illumination-invariant face recognition task, and is shown to be more suitable for dealing with uneven illumination in face images.
Abstract: Variable illumination conditions, especially side-lighting effects in face images, form a main obstacle in face recognition systems. To deal with this problem, this paper presents a novel adaptive region-based image preprocessing scheme that enhances face images and facilitates the illumination-invariant face recognition task. The proposed method first segments an image into different regions according to its local illumination conditions; then both the contrast and the edges are enhanced regionally so as to alleviate the side-lighting effect. Unlike existing contrast enhancement methods, we apply the proposed adaptive region-based histogram equalization to the low-frequency coefficients to minimize illumination variations under different lighting conditions. Beyond contrast enhancement, observing that under poor illumination the high-frequency features become more important for recognition, we propose enlarging the high-frequency coefficients to make face images more distinguishable. This procedure is called edge enhancement (EdgeE), and it is also region-based. Compared with existing image preprocessing methods, our method is shown to be more suitable for dealing with uneven illumination in face images. Experimental results on representative databases, the Yale B + Extended Yale B database and the Carnegie Mellon University Pose, Illumination, and Expression (PIE) database, show that the proposed method significantly improves the performance on face images with illumination variations. It does not require any modeling or model-fitting steps, can be implemented easily, and can be applied directly to any single image without any lighting assumption or prior information on 3-D face geometry.
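The region-based equalization idea can be illustrated with a crude sketch: split the image into a fixed grid (a stand-in for the paper's illumination-driven segmentation) and rank-equalize each region independently, so a dark region and a bright region each spread over the full output range. The function and its parameters are hypothetical.

```python
import numpy as np

def regionwise_equalize(img, n_rows=2, n_cols=2):
    """Histogram-equalize each grid cell of the image independently,
    mapping every cell's intensities onto [0, 1] by rank."""
    out = np.empty_like(img, dtype=np.float64)
    for bi in np.array_split(np.arange(img.shape[0]), n_rows):
        for bj in np.array_split(np.arange(img.shape[1]), n_cols):
            block = img[np.ix_(bi, bj)].astype(np.float64)
            ranks = block.ravel().argsort().argsort()
            out[np.ix_(bi, bj)] = ranks.reshape(block.shape) / (block.size - 1)
    return out
```

A global equalization would let the bright half dominate the histogram; per-region equalization is what alleviates the side-lighting effect the abstract describes.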

83 citations


Cites background or methods from "Face illumination normalization on ..."

  • ...In [26], based on the LTV model and the logarithm discrete cosine transform (LOG-DCT) method [27], a new illumination normalization method was recently proposed....


  • ...and the reconstruction with normalized large- and small-scale feature images (RLS) LOG-DCT method [26]....


  • ...Methods and recognition rates (%): None 35.1; HE 42.2; RHE [22] 85.4; LTV [25] 100; RLS LOG-DCT [26] 99.9; Our Method 100....


Journal ArticleDOI
TL;DR: A penalized strategy is developed in which penalization terms guide the preimage learning process; experimental results show that the proposed preimage learning algorithm obtains a lower mean square error (MSE) and better visual quality of reconstructed images.
Abstract: Finding the preimage of a feature vector in kernel principal component analysis (KPCA) is of crucial importance when KPCA is applied in applications such as image preprocessing. Since the exact preimage of a feature vector in the kernel feature space normally does not exist in the input data space, an approximate preimage is learned, and encouraging results have been reported in the last few years. However, it is still difficult to find a "good" estimate of the preimage. As preimage estimation in kernel methods is ill-posed, how to guide the preimage learning toward a better estimate is important and still an open problem. To address this problem, a penalized strategy is developed in this paper, in which penalization terms are used to guide the preimage learning process. To develop an efficient penalized technique, we first propose a two-step general framework, in which a preimage is directly modeled as a weighted combination of the observed samples and the weights are learned by an optimization function subject to certain constraints. Compared to existing techniques, this also has the advantage of directly turning preimage learning into optimization of the combination weights. Under this framework, a penalized methodology is developed by integrating two types of penalization. First, to ensure learning a well-defined preimage, in which no entry is out of the data range, a convexity constraint is imposed for learning the combination weights; further effects of the convexity constraint are also explored. Second, a penalized function is integrated as part of the optimization function to guide the preimage learning process. In particular, a weakly supervised penalty is proposed, discussed, and extensively evaluated along with the Laplacian penalty and the ridge penalty. It can further be interpreted that the learned preimage preserves a kind of pointwise conditional mutual information. Finally, KPCA with preimage learning is applied to face image data sets for facial expression normalization, face image denoising, recovery of missing parts from occlusion, and illumination normalization. Experimental results show that the proposed preimage learning algorithm obtains a lower mean square error (MSE) and better visual quality of reconstructed images.
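The framework's core modeling step (a preimage as a weighted combination of observed samples, under a convexity constraint) can be shown with a toy sketch. The softmax weighting below is a stand-in for the paper's penalized optimization; the function name and inputs are hypothetical.

```python
import numpy as np

def convex_preimage(K_target, X):
    """Toy preimage estimate as a convex combination of training samples.
    K_target: (n,) kernel evaluations between the target feature and the
    rows of X. Softmax weights satisfy w_i >= 0 and sum(w) = 1 by
    construction, so every entry of the preimage stays inside the data
    range -- the effect the convexity constraint is meant to guarantee."""
    w = np.exp(K_target - K_target.max())
    w /= w.sum()
    return w @ X
```

The point of the toy is the constraint, not the weights: any weight-learning scheme that keeps the weights on the simplex produces a well-defined preimage within the observed data range.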

59 citations

Journal ArticleDOI
TL;DR: This paper proposes taking a logarithm transform, which changes the multiplication of surface albedo and light intensity into an additive model, and shows that the proposed method yields promising results, especially in uncontrolled lighting conditions, even mixed with other complicated variations.
Abstract: The Lambertian model is a classical illumination model consisting of a surface albedo component and a light intensity component. Some previous works assume that the light intensity component lies mainly in the large-scale features. They adopt holistic image decompositions to separate it out, but it is difficult to decide the separating point between large-scale and small-scale features. In this paper, we propose taking a logarithm transform, which changes the multiplication of surface albedo and light intensity into an additive model. A difference (subtraction) between two pixels in a neighborhood can then eliminate most of the light intensity component. By dividing a neighborhood into subregions, edge maps of multiple scales can be obtained. Each edge map is then multiplied by a weight determined by an independent training scheme, and all the weighted edge maps are combined to form a robust holistic feature map. Extensive experiments on four benchmark data sets in controlled and uncontrolled lighting conditions show that the proposed method yields promising results, especially in uncontrolled lighting conditions, even mixed with other complicated variations.
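The key step of the abstract is easy to sketch: after the log transform the Lambertian product I = albedo × light becomes a sum, so differencing two nearby pixels cancels the (locally smooth) light term and leaves an albedo edge map. The function name is illustrative, not from the paper.

```python
import numpy as np

def log_diff_edgemap(img, shift=1, eps=1e-6):
    """log(albedo * light) = log(albedo) + log(light); the horizontal
    difference of neighbors cancels log(light) wherever the light field
    is locally constant, leaving an illumination-insensitive edge map."""
    log_img = np.log(img.astype(np.float64) + eps)
    return log_img - np.roll(log_img, shift, axis=1)
```

In the worked test below, two renderings of the same albedo under very different lighting (a row-wise gradient vs. uniform dim light) produce identical edge maps, because both light fields are constant along the differencing axis.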

54 citations

References
More filters
Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
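The eigenface procedure described above reduces to PCA on flattened face vectors plus nearest-neighbor matching in weight space. A minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def train_eigenfaces(faces, n_components=5):
    """PCA on (n_samples, n_pixels) face vectors: returns the mean face
    and the top principal components ("eigenfaces") from an SVD."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def recognize(probe, gallery_weights, mean, eigenfaces):
    """Project the probe onto the eigenfaces and return the index of the
    gallery face whose weight vector is nearest."""
    w = (probe - mean) @ eigenfaces.T
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(dists.argmin())
```

As the abstract notes, recognition only compares low-dimensional weight vectors, which is what makes the system near-real-time.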

14,562 citations


"Face illumination normalization on ..." refers methods in this paper

  • ...NPL-QI takes the advantage of the linear relationship between spherical harmonic bases and PCA bases....


  • ...Some representative algorithms include histogram equalization (HE) [8], shape-from-shading (SFS) [9], illumination cone [10], BHE and linear illumination model [11], low-dimensional illumination space representation [12], illumination compensation by truncating discrete cosine transform coefficients in logarithm domain [13], quotient image relighting method (QI) [14] and the non-point light quotient image relighting method (NPL-QI) based on PCA subspace [15] [16]....


  • ...Most existing methods for face recognition such as principal component analysis (PCA) [2], independent component analysis (ICA) [3] and linear discriminant analysis (LDA) [19] based methods are sensitive to illumination variations [4]....


Journal ArticleDOI
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
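The heart of the Fisherface method is Fisher's linear discriminant, which for two classes has the closed form w ∝ Sw⁻¹(m₁ − m₀). A minimal sketch (the regularization term is an added numerical stabilizer, not part of the original formulation):

```python
import numpy as np

def fisher_direction(X0, X1, reg=1e-6):
    """Two-class Fisher discriminant: direction that separates the class
    means relative to the pooled within-class scatter Sw."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    Sw += reg * np.eye(Sw.shape[0])   # stabilizer for near-singular scatter
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)
```

Dividing by the within-class scatter, rather than maximizing raw variance as eigenfaces do, is what lets the projection discount directions dominated by lighting variation.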

11,674 citations


"Face illumination normalization on ..." refers methods in this paper

  • ...Most existing methods for face recognition such as principal component analysis (PCA) [2], independent component analysis (ICA) [3] and linear discriminant analysis (LDA) [19] based methods are sensitive to illumination variations [4]....


Journal ArticleDOI
TL;DR: A generative appearance-based method for recognizing human faces under variation in lighting and viewpoint that exploits the fact that the set of images of an object in fixed pose but under all possible illumination conditions, is a convex cone in the space of images.
Abstract: We present a generative appearance-based method for recognizing human faces under variation in lighting and viewpoint. Our method exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images. Using a small number of training images of each face taken with different lighting directions, the shape and albedo of the face can be reconstructed. In turn, this reconstruction serves as a generative model that can be used to render (or synthesize) images of the face under novel poses and illumination conditions. The pose space is then sampled and, for each pose, the corresponding illumination cone is approximated by a low-dimensional linear subspace whose basis vectors are estimated using the generative model. Our recognition algorithm assigns to a test image the identity of the closest approximated illumination cone. Test results show that the method performs almost without error, except on the most extreme lighting directions.

5,027 citations


"Face illumination normalization on ..." refers methods in this paper

  • ...Some representative algorithms include histogram equalization (HE) [8], shape-from-shading (SFS) [9], illumination cone [10], BHE and linear illumination model [11], low-dimensional illumination space representation [12], illumination compensation by truncating discrete cosine transform…...


Journal ArticleDOI
TL;DR: Independent component analysis (ICA), a generalization of PCA, was used, using a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons, which was superior to representations based on PCA for recognizing faces across days and changes in expression.
Abstract: A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance.

2,044 citations


"Face illumination normalization on ..." refers methods in this paper

  • ...Most existing methods for face recognition such as principal component analysis (PCA) [2], independent component analysis (ICA) [3] and linear discriminant analysis (LDA) [19] based methods are sensitive to illumination variations [4]....


Proceedings ArticleDOI
20 May 2002
TL;DR: Between October 2000 and December 2000, a database of over 40,000 facial images of 68 people was collected; using the CMU 3D Room, each person was imaged across 13 different poses, under 43 different illumination conditions, and with four different expressions.
Abstract: Between October 2000 and December 2000, we collected a database of over 40,000 facial images of 68 people. Using the CMU (Carnegie Mellon University) 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this database the CMU Pose, Illumination and Expression (PIE) database. In this paper, we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database.

1,697 citations