About: The illumination problem is a research topic. Over its lifetime, 93 publications have been published within this topic, receiving 5,859 citations.
Papers published on a yearly basis
01 Mar 2017
TL;DR: Experimental results show that the proposed algorithm outperforms existing state-of-the-art algorithms.
Abstract: Face recognition is a fascinating research area with potential applications in almost every field. Hence, there is a strong need for a robust system that can overcome the problems faced in recognizing a face correctly. A face recognition pipeline consists of pre-processing, feature extraction, and classification. Pre-processing includes image resizing, image normalization, and changing the color space of the image; normalization helps in eliminating the illumination problem. The proposed method adopts COSLET, i.e. a combination of the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). The features obtained are mapped onto a low-dimensional subspace to retain the principal components and are finally classified with a KNN classifier. Experimental results show that the proposed algorithm outperforms existing state-of-the-art algorithms.
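The abstract does not give the exact COSLET formulation, but the pipeline it describes (wavelet transform, DCT, projection to a low-dimensional subspace, KNN) can be sketched roughly as follows. The Haar wavelet, the 16×16 coefficient crop, and the plain SVD-based PCA are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.fft import dctn

def haar_dwt2(img):
    # single-level 2-D Haar transform, keeping only the LL (approximation) band
    a = (img[0::2, :] + img[1::2, :]) / 2
    return (a[:, 0::2] + a[:, 1::2]) / 2

def coslet_features(img, n_dct=16):
    # DCT of the wavelet approximation band; keep low-frequency coefficients
    ll = haar_dwt2(img.astype(float))
    c = dctn(ll, norm='ortho')
    return c[:n_dct, :n_dct].ravel()

def pca_project(feats, k):
    # map features onto a k-dimensional principal subspace
    mu = feats.mean(axis=0)
    _, _, vt = np.linalg.svd(feats - mu, full_matrices=False)
    return (feats - mu) @ vt[:k].T, mu, vt[:k]

def knn_classify(train, labels, query, k=1):
    # majority vote among the k nearest training features
    idx = np.argsort(np.linalg.norm(train - query, axis=1))[:k]
    vals, counts = np.unique(labels[idx], return_counts=True)
    return vals[np.argmax(counts)]
```

A query image would go through `coslet_features`, be centered and projected with the stored `mu` and components, and then be classified by `knn_classify`.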
30 May 2005
TL;DR: Two multiple-illumination-eigenspace-based methods, RDEB and BPNNB, are presented for solving the variable-illumination problem in face recognition; experiments show that both methods achieve a high recognition rate.
Abstract: This paper presents two multiple-illumination-eigenspace-based methods, RDEB and BPNNB, for solving the variable-illumination problem in face recognition. Experiments show that the methods achieve a high recognition rate. In particular, BPNNB outperforms a reference method that is assumed to know the illumination direction of each face and that completes recognition with the eigenface method in the specific eigenspace built for each face subset with that illumination direction.
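The paper's RDEB and BPNNB details are not given here, but the underlying idea of one eigenspace per illumination subset can be sketched. This is a minimal assumption-laden version: one eigenface model is fitted per subset, and a query is assigned to the subset whose eigenspace reconstructs it best; the real methods add a rule-based decision (RDEB) or a BP neural network (BPNNB) on top.

```python
import numpy as np

def fit_eigenspace(faces, k):
    # eigenface model for one illumination subset: mean + top-k components
    mu = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mu, full_matrices=False)
    return mu, vt[:k]

def reconstruction_error(x, mu, comps):
    # distance from x to its projection onto the eigenspace
    recon = mu + ((x - mu) @ comps.T) @ comps
    return np.linalg.norm(x - recon)

def classify_over_eigenspaces(x, eigenspaces):
    # pick the illumination eigenspace that best reconstructs x
    errs = [reconstruction_error(x, mu, v) for mu, v in eigenspaces]
    return int(np.argmin(errs))
```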
01 Jan 2011
TL;DR: An adaptive skin-colour classification technique, which largely resolves the above-mentioned problems of varying illumination and shadow, is proposed and presented in this paper.
Abstract: Among the various features of the human face, skin colour is one of the most powerful means of discerning face appearance. Numerous skin-colour models, which model human skin colours in different ways, have been proposed by researchers, and a number of colour spaces have been adopted for skin-colour modelling. In particular, colour-based segmentation is a significant step in any colour-based face detection approach, which uses skin-colour models to classify an image into skin and non-skin regions. Varying illumination is one of the most frequent challenges in face detection systems: a change in the light-source distribution or in the illumination level (indoor, outdoor, highlights, shadows, non-white lights) affects the appearance of an object (such as a human face) in a scene and produces changes in object colour and shape. An adaptive skin-colour classification technique, which largely resolves the above-mentioned problems of illumination and shadow, is proposed and presented in this paper. The proposed method first identifies those pixels that have an illumination problem using the integral image; these pixels are then adjusted with an adaptive gamma intensity correction method to rectify the negative effects of the illumination problem. The experiments showed that the proposed method significantly improves a colour-based face detection system in terms of both detection rate and accuracy.
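The two-step scheme the abstract describes (flag badly lit pixels via the integral image, then fix them with adaptive gamma correction) can be sketched roughly as below. The window radius, darkness threshold, and fixed gamma value are illustrative assumptions; the paper's correction is adaptive per pixel rather than a single fixed exponent.

```python
import numpy as np

def integral_image(img):
    # summed-area table with a zero top row and left column
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_mean(ii, y, x, r, h, w):
    # mean over a (2r+1)-square window in O(1) via four table lookups
    y0, y1 = max(y - r, 0), min(y + r + 1, h)
    x0, x1 = max(x - r, 0), min(x + r + 1, w)
    s = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return s / ((y1 - y0) * (x1 - x0))

def adaptive_gamma(img, r=8, dark=0.35, gamma=0.6):
    # brighten pixels whose local mean indicates an under-illuminated
    # or shadowed neighbourhood; intensities assumed in [0, 1]
    h, w = img.shape
    ii = integral_image(img)
    out = img.copy()
    for y in range(h):
        for x in range(w):
            if box_mean(ii, y, x, r, h, w) < dark:
                out[y, x] = img[y, x] ** gamma  # gamma < 1 brightens
    return out
```

Well-lit regions pass through unchanged, so the correction only touches the pixels the integral-image test flags.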
01 Nov 2017
TL;DR: The method combines collaborative representation with logarithmic total variation (LTV), using LTV as a pre-processing step to the algorithm.
Abstract: Many algorithms have been proposed for face recognition. Sparse representation based classification is an approach that classifies a sample with an over-complete dictionary; the coding of a test sample can be recovered via L1-norm minimization. A newer approach, collaborative representation based classification, works in the same way but recovers the solution using L2-norm minimization. Both collaborative representation and sparse representation cope with only small variations in pose and illumination. In this paper, we propose an approach to tackle the problem of illumination variation in collaborative representation. Our method combines collaborative representation with logarithmic total variation (LTV), using LTV as a pre-processing step to our algorithm. LTV has a large positive impact on the results.
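The collaborative representation step has a closed form: the L2-regularized coding over the whole dictionary is ridge regression, and the query is assigned to the class whose columns give the smallest reconstruction residual. A minimal sketch of that step follows; the LTV pre-processing (decomposing the image into illumination and reflectance and keeping the latter) is omitted here, and the dictionary/label names are illustrative.

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    # collaborative representation: ridge-regularized L2 coding
    # over the whole dictionary D (columns = training samples),
    # then class-wise reconstruction residuals
    n = D.shape[1]
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(n), D.T @ y)
    best, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        r = np.linalg.norm(y - D[:, mask] @ alpha[mask])
        if r < best_r:
            best, best_r = c, r
    return best
```

Replacing the solve with an L1 solver recovers sparse representation based classification; the L2 version trades some robustness for a direct closed-form solution.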
24 Nov 2009
TL;DR: A novel method for solving the nonuniform illumination problem using multiresolution decomposition and a new technique called hillcrest-valley classification with an adaptive mean filter, which normalizes illumination and automatically detects dominant facial features such as the eyes, nose, and mouth.
Abstract: Automatic facial feature detection is one of the most important topics in computer vision, and many open problems remain unsolved; nonuniform illumination is one of them. This paper proposes a novel method for solving the nonuniform illumination problem using multiresolution decomposition and a new technique called hillcrest-valley classification with an adaptive mean filter, which normalizes illumination and automatically detects dominant facial features such as the eyes, nose, and mouth. The proposed method is divided into three modules: eye detection, nose detection, and mouth detection. A single face image is divided into three regions (eye, nose, and mouth); multiresolution decomposition is then used to detect the eyes, and thresholding to detect the nose and the mouth. For multiresolution decomposition, the eye region is decomposed into small blocks, and hillcrest-valley classification with an adaptive mean filter classifies each block as either a high- or low-intensity region. Each low-intensity (valley) region is then decomposed into smaller blocks, and each block is again classified as either a high- or low-intensity region. The remaining low-intensity regions are defined as the eyes. Finally, the nose and the mouth are detected using thresholding. The method was evaluated on the YaleB face database, which consists of face images taken under different illumination conditions, and the experimental results indicate that the proposed method achieves a high accuracy rate.
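The coarse-to-fine valley search the abstract describes can be sketched as below. The "adaptive mean" here is simply the mean of the block means, a stand-in for the paper's adaptive mean filter, and the single halving of the block size is an illustrative choice; both are assumptions.

```python
import numpy as np

def block_means(img, bs):
    # mean intensity of each bs x bs block (edges cropped to a multiple of bs)
    h, w = img.shape
    return img[:h - h % bs, :w - w % bs].reshape(h // bs, bs, -1, bs).mean(axis=(1, 3))

def valley_blocks(img, bs):
    # hillcrest-valley style split: blocks darker than the adaptive
    # (here: global) mean are "valleys", i.e. eye candidates
    m = block_means(img, bs)
    return m < m.mean()

def refine_valleys(img, bs):
    # second-level decomposition: re-test each valley at half the block size
    coarse = valley_blocks(img, bs)
    fine = valley_blocks(img, bs // 2)
    # keep fine valleys whose parent block was also a valley
    parent = np.repeat(np.repeat(coarse, 2, 0), 2, 1)[:fine.shape[0], :fine.shape[1]]
    return fine & parent
```

The surviving fine-level valleys would then be taken as eye candidates, with thresholding handling the nose and mouth regions separately.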