Journal ArticleDOI

Illumination invariant face recognition

01 Oct 2005-Pattern Recognition (Elsevier)-Vol. 38, Iss: 10, pp 1705-1716
TL;DR: A novel approach to handle the illumination problem that can restore a face image captured under arbitrary lighting conditions to one with frontal illumination by using a ratio-image between the face image and a reference face image, both of which are blurred by a Gaussian filter.
Abstract: The appearance of a face will vary drastically when the illumination changes. Variations in lighting conditions make face recognition an even more challenging and difficult task. In this paper, we propose a novel approach to handle the illumination problem. Our method can restore a face image captured under arbitrary lighting conditions to one with frontal illumination by using a ratio-image between the face image and a reference face image, both of which are blurred by a Gaussian filter. An iterative algorithm is then used to update the reference image, which is reconstructed from the restored image by means of principal component analysis (PCA), in order to obtain a visually better restored image. Image processing techniques are also used to improve the quality of the restored image. To evaluate the performance of our algorithm, restored images with frontal illumination are used for face recognition by means of PCA. Experimental results demonstrate that face recognition using our method achieves a higher recognition rate on the Yale B and Yale databases. Our algorithm has several advantages over previous algorithms: (1) it does not need to estimate the face surface normals and the light source directions, (2) it does not need many images captured under different lighting conditions for each person, nor a set of bootstrap images that includes many images with different illuminations, and (3) it does not need to detect accurate positions of facial feature points or to warp the image for alignment.
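The restoration step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it applies only the ratio-image relighting (a separable Gaussian blur with zero padding; the kernel size, sigma, and function names are assumptions) and omits the iterative PCA-based update of the reference image.

```python
import numpy as np

def gaussian_kernel(size=15, sigma=5.0):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img, size=15, sigma=5.0):
    """Separable Gaussian blur via two 1-D convolutions ('same' mode, zero padding)."""
    k = gaussian_kernel(size, sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def restore_illumination(face, reference, eps=1e-6):
    """Relight `face` toward the lighting of `reference` using the ratio of
    their Gaussian-blurred (low-frequency illumination) components."""
    ratio = gaussian_blur(reference) / (gaussian_blur(face) + eps)
    return face * ratio
```

Because blurring is linear, a face that is a uniformly dimmed copy of the reference is restored exactly; for real lighting, the blur isolates the low-frequency illumination component so the ratio relights the face while leaving high-frequency identity detail largely intact.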
Citations
Proceedings ArticleDOI
12 Dec 2007
TL;DR: An extensive and up-to-date survey of the existing techniques to address the illumination variation problem is presented and covers the passive techniques that attempt to solve the illumination problem by studying the visible light images in which face appearance has been altered by varying illumination.
Abstract: The illumination variation problem is one of the well-known problems in face recognition in uncontrolled environment. In this paper an extensive and up-to-date survey of the existing techniques to address this problem is presented. This survey covers the passive techniques that attempt to solve the illumination problem by studying the visible light images in which face appearance has been altered by varying illumination, as well as the active techniques that aim to obtain images of face modalities invariant to environmental illumination.

260 citations


Cites background from "Illumination invariant face recogni..."

  • ...illumination invariant face recognition can be found in [75][33][14][32]....

Journal ArticleDOI
TL;DR: An efficient representation method insensitive to varying illumination is proposed for human face recognition, which can effectively eliminate the effect of uneven illumination and greatly improve the recognition results.
Abstract: In this paper, an efficient representation method insensitive to varying illumination is proposed for human face recognition. Theoretical analysis based on the human face model and the illumination model shows that the effects of varying lighting on a human face image can be modeled by a sequence of multiplicative and additive noises. Instead of computing these noises, which is very difficult for real applications, we aim to reduce or even remove their effect. In our method, a local normalization technique is applied to an image, which can effectively and efficiently eliminate the effect of uneven illuminations while keeping the local statistical properties of the processed image the same as in the corresponding image under normal lighting condition. After processing, the image under varying illumination will have similar pixel values to the corresponding image that is under normal lighting condition. Then, the processed images are used for face recognition. The proposed algorithm has been evaluated based on the Yale database, the AR database, the PIE database, the YaleB database and the combined database by using different face recognition methods such as PCA, ICA and Gabor wavelets. Consistent and promising results were obtained, which show that our method can effectively eliminate the effect of uneven illumination and greatly improve the recognition results.
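The local normalization idea in this abstract can be sketched as a block-wise zero-mean, unit-variance transform. This is a minimal sketch (the block size, epsilon, and function name are assumptions; the paper's exact windowing may differ):

```python
import numpy as np

def local_normalize(img, block=8, eps=1e-6):
    """Block-wise local normalization: subtract the local mean and divide by
    the local standard deviation, cancelling illumination modeled as
    multiplicative and additive noise that is roughly constant per block."""
    img = img.astype(float)
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i+block, j:j+block]
            out[i:i+block, j:j+block] = (patch - patch.mean()) / (patch.std() + eps)
    return out
```

Within each block, an affine lighting change a*x + b cancels exactly, which is why two differently lit copies of the same face normalize to nearly identical images.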

157 citations

Journal ArticleDOI
TL;DR: The goal of this study is to discuss the significant challenges involved in the adaptation of existing face recognition algorithms to build successful systems that can be employed in the real world and propose several possible future directions for face recognition.
Abstract: Face recognition has received significant attention because of its numerous applications in access control, law enforcement, security, surveillance, Internet communication and computer entertainment. Although significant progress has been made, state-of-the-art face recognition systems yield satisfactory performance only under controlled scenarios and degrade significantly when confronted with real-world scenarios. Real-world scenarios involve unconstrained conditions such as illumination and pose variations, occlusion and expressions. Thus, there remain plenty of challenges and opportunities ahead. Recently, some researchers have begun to examine face recognition under unconstrained conditions. Instead of providing a detailed experimental evaluation, which has already been presented in the referenced works, this study serves more as a guide for readers. Thus, the goal of this study is to discuss the significant challenges involved in adapting existing face recognition algorithms to build successful systems that can be employed in the real world. It then discusses what has been achieved so far, focusing on the most successful algorithms, and reviews their successes and failures. It also proposes several possible future directions for face recognition. Thus, it is a good starting point for research projects on face recognition, as useful techniques can be isolated and past errors can be avoided.

139 citations

Journal ArticleDOI
TL;DR: A new coding scheme, namely directional binary code (DBC), is proposed for near-infrared face recognition and three protocols are provided to evaluate and compare the proposed DBC method with baseline face recognition methods, including Gabor based Eigenface, Fisherface and LBP on the PolyU-NIRFD database.
Abstract: This paper introduces the establishment of PolyU near-infrared face database (PolyU-NIRFD) and presents a new coding scheme for face recognition. The PolyU-NIRFD contains images from 350 subjects, each contributing about 100 samples with variations of pose, expression, focus, scale, time, etc. In total, 35,000 samples were collected in the database. The PolyU-NIRFD provides a platform for researchers to develop and evaluate various near-infrared face recognition techniques under large scale, controlled and uncontrolled conditions. A new coding scheme, namely directional binary code (DBC), is then proposed for near-infrared face recognition. Finally, we provide three protocols to evaluate and compare the proposed DBC method with baseline face recognition methods, including Gabor based Eigenface, Fisherface and LBP (local binary pattern) on the PolyU-NIRFD database. In addition, we also conduct experiments on the visible light band FERET database to further validate the proposed DBC scheme.
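DBC itself encodes directional gradient information, but the LBP baseline named above is easy to sketch. Here is a minimal 8-neighbour version (the function name and the >= thresholding convention are assumptions):

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern on the interior pixels of a
    grayscale image: each pixel becomes an 8-bit code with one bit per
    neighbour whose value is >= the centre value."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= (n >= c).astype(np.uint8) << bit
    return code
```

Because the code depends only on the ordering of a pixel against its neighbours, it is invariant to monotonic grayscale changes, which is what makes LBP-style descriptors attractive under illumination variation.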

134 citations

Journal ArticleDOI
01 Jan 2013
TL;DR: A novel framework for real-world face recognition in uncontrolled settings named Face Analysis for Commercial Entities (FACE), which adopts reliability indices, which estimate the “acceptability” of the final identification decision made by the classifier.
Abstract: Face recognition has made significant advances in the last decade, but robust commercial applications are still lacking. Current authentication/identification applications are limited to controlled settings, e.g., limited pose and illumination changes, with the user usually aware of being screened and collaborating in the process. To address the challenges of looser restrictions, this paper proposes a novel framework for real-world face recognition in uncontrolled settings named Face Analysis for Commercial Entities (FACE). Its robustness comes from normalization ("correction") strategies that address pose and illumination variations. In addition, two separate image quality indices quantitatively assess pose and illumination changes for each biometric query before it is submitted to the classifier. Samples with poor quality may be discarded, undergo manual classification, or, when possible, trigger a new capture. After this filtering, template similarity for matching is measured using a localized version of the image correlation index. Finally, FACE adopts reliability indices, which estimate the "acceptability" of the final identification decision made by the classifier. Experimental results show that the accuracy of FACE (in terms of recognition rate) compares favorably, and in some cases by significant margins, against popular face recognition methods. In particular, FACE is compared against SVM, incremental SVM, principal component analysis, incremental LDA, ICA, and hierarchical multiscale local binary pattern. Testing exploits data from different data sets: CelebrityDB, Labeled Faces in the Wild, SCface, and FERET. The face images used present variations in pose, expression, illumination, image quality, and resolution.
Our experiments show the benefits of using image quality and reliability indices to enhance overall accuracy, on one side, and to provide for individualized processing of biometric probes for better decision-making purposes, on the other side. Both kinds of indices, owing to the way they are defined, can be easily integrated within different frameworks and off-the-shelf biometric applications for the following: 1) data fusion; 2) online identity management; and 3) interoperability. The results obtained by FACE witness a significant increase in accuracy when compared with the results produced by the other algorithms considered.

110 citations


Cites background from "Illumination invariant face recogni..."

  • ...Another approach [30] uses a ratio-image between the face image and a reference face image....


References
Journal ArticleDOI
TL;DR: There is a natural uncertainty principle between detection and localization performance, which are the two main goals, and with this principle a single operator shape is derived which is optimal at any scale.
Abstract: This paper describes a computational approach to edge detection. The success of the approach depends on the definition of a comprehensive set of goals for the computation of edge points. These goals must be precise enough to delimit the desired behavior of the detector while making minimal assumptions about the form of the solution. We define detection and localization criteria for a class of edges, and present mathematical forms for these criteria as functionals on the operator impulse response. A third criterion is then added to ensure that the detector has only one response to a single edge. We use the criteria in numerical optimization to derive detectors for several common image features, including step edges. On specializing the analysis to step edges, we find that there is a natural uncertainty principle between detection and localization performance, which are the two main goals. With this principle we derive a single operator shape which is optimal at any scale. The optimal detector has a simple approximate implementation in which edges are marked at maxima in gradient magnitude of a Gaussian-smoothed image. We extend this simple detector using operators of several widths to cope with different signal-to-noise ratios in the image. We present a general method, called feature synthesis, for the fine-to-coarse integration of information from operators at different scales. Finally we show that step edge detector performance improves considerably as the operator point spread function is extended along the edge.
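The approximate implementation mentioned in the abstract, gradient magnitude of a Gaussian-smoothed image with edges at its maxima, can be sketched as follows. This illustration omits the non-maximum suppression and hysteresis thresholding steps of a full Canny detector (sigma, radius, and the function name are assumptions):

```python
import numpy as np

def gradient_magnitude_of_smoothed(img, sigma=1.0, radius=3):
    """Smooth with a separable Gaussian, then return the gradient magnitude;
    edges would be marked at local maxima of this magnitude along the
    gradient direction (that suppression step is omitted here)."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    smooth = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1,
                                 img.astype(float))
    smooth = np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, smooth)
    gy, gx = np.gradient(smooth)
    return np.hypot(gx, gy)
```

On a vertical step edge, the magnitude peaks at the step location, which is exactly where the maxima-marking step of the full detector would place the edge.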

28,073 citations

Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
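The eigenface projection described above can be sketched with a thin SVD, which yields the principal components without forming the covariance matrix explicitly. This is a minimal sketch (function names and the SVD route are assumptions; the paper's system also handles head location and tracking):

```python
import numpy as np

def eigenfaces(faces, k):
    """Top-k eigenfaces of a stack of flattened face images (one per row):
    the leading right singular vectors of the mean-centred data."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:k]          # each row of vt[:k] is an eigenface

def project(face, mean, components):
    """Weights of a face in eigenface space; recognition compares these
    weights against those stored for known individuals."""
    return components @ (face - mean)
```

With enough components to span the (centred) training set, a training face is reconstructed exactly from its weights, which is the sense in which the weight vector characterizes the face.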

14,562 citations

Journal ArticleDOI
TL;DR: A face recognition algorithm which is insensitive to large variation in lighting direction and facial expression is developed, based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variations in lighting and facial expressions.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
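The core of the Fisherface idea is Fisher's linear discriminant. A minimal two-class sketch follows; the ridge term `reg` is an assumption added here to keep the within-class scatter matrix invertible when pixels outnumber samples (the paper handles that regime differently, via its projection strategy), and the function name is hypothetical:

```python
import numpy as np

def fisher_direction(X1, X2, reg=1e-6):
    """Two-class Fisher linear discriminant: the direction
    w = Sw^{-1} (m1 - m2) that maximizes between-class separation
    relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    S1 = (X1 - m1).T @ (X1 - m1)
    S2 = (X2 - m2).T @ (X2 - m2)
    Sw = S1 + S2 + reg * np.eye(X1.shape[1])
    return np.linalg.solve(Sw, m1 - m2)
```

Projecting onto w collapses each class into a tight, well-separated cluster, which is the "well separated classes in a low-dimensional subspace" property the abstract refers to.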

11,674 citations

Journal ArticleDOI
TL;DR: This work describes a method for building models by learning patterns of variability from a training set of correctly annotated images that can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes).
Abstract: Model-based vision is firmly established as a robust approach to recognizing and locating known rigid objects in the presence of noise, clutter, and occlusion. It is more problematic to apply model-based methods to images of objects whose appearance can vary, though a number of approaches based on the use of flexible templates have been proposed. The problem with existing methods is that they sacrifice model specificity in order to accommodate variability, thereby compromising robustness during image interpretation. We argue that a model should only be able to deform in ways characteristic of the class of objects it represents. We describe a method for building models by learning patterns of variability from a training set of correctly annotated images. These models can be used for image search in an iterative refinement algorithm analogous to that employed by Active Contour Models (Snakes). The key difference is that our Active Shape Models can only deform to fit the data in ways consistent with the training set. We show several practical examples where we have built such models and used them to locate partially occluded objects in noisy, cluttered images.

7,969 citations