Book Chapter

Enhancing Face Recognition Under Unconstrained Background Clutter Using Color Based Segmentation

01 Jan 2016, pp. 51-62

TL;DR: A way to combine three subspace learning algorithms, namely Eigenfaces, two-dimensional Principal Component Analysis (2DPCA), and Row-Column 2DPCA, with a color-based segmentation approach is proposed in order to boost recognition rates under unconstrained scene conditions.

Abstract: Face recognition algorithms have been researched extensively for over three decades. Even after years of research, the algorithms developed achieve practical success only under controlled environments; their performance usually degrades under unconstrained scene conditions such as background clutter and non-uniform illumination. This paper examines the contrast in performance of standard recognition algorithms under controlled and uncontrolled environments. It proposes a way to combine three subspace learning algorithms, namely Eigenfaces (1DPCA), two-dimensional Principal Component Analysis (2DPCA), and Row-Column 2DPCA (RC2DPCA), with a color-based segmentation approach in order to boost recognition rates under unconstrained scene conditions. A series of steps extracts all candidate facial regions from an image, after which the algorithm selects the largest candidate as the probable face and places a bounding box around the blob to isolate the face alone. The proposed algorithms, formed by combining these segmentation methods with the standard recognizers, were found to achieve higher accuracy than the standard recognition techniques on their own. Moreover, the approach serves as a general framework into which more robust recognition techniques could be incorporated to achieve further gains in accuracy.
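The segmentation step described above can be sketched roughly as follows. This is a minimal illustration only: the color space, skin thresholds, and morphology used by the authors are assumptions here, and the function name is hypothetical. It uses OpenCV (4.x) and NumPy.

```python
import cv2
import numpy as np

def isolate_largest_face_region(bgr_image):
    """Segment skin-colored pixels, keep the largest blob, and return its bounding-box crop."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # A commonly used Cr/Cb skin range; the paper's actual thresholds may differ.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)   # largest candidate face blob
    x, y, w, h = cv2.boundingRect(largest)         # bounding box around the blob
    return bgr_image[y:y + h, x:x + w]

# The cropped face would then be resized and fed to the 1DPCA / 2DPCA / RC2DPCA recognizers.
```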



Citations
01 Jun 2017
TL;DR: Automatic face recognition algorithms are discussed; a combination of supervised learning algorithms is realized, and adaptive coefficients are employed to address the effects of asymmetric classes.
Abstract: In this investigation, automatic face recognition algorithms are discussed. For this purpose, a combination of supervised learning algorithms is realized: the classifier is first designed using a fuzzy-based support vector machine, and the AdaBoost meta-algorithm is then applied to the designed classifier to improve accuracy and control overfitting. To address the effects of asymmetric classes, adaptive coefficients are employed. In addition, to reduce the data size, principal component analysis is applied to the raw data. The proposed approach is evaluated on a set of images extracted from the Yale University data set, and its accuracy is verified.
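A rough sketch of the cited pipeline (PCA for dimensionality reduction, then a boosted SVM classifier) is shown below. The fuzzy membership weighting and adaptive class coefficients of the cited work are not reproduced, component counts are illustrative, and the parameter names assume scikit-learn 1.2 or later.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

model = make_pipeline(
    PCA(n_components=50),                 # reduce flattened face images to 50 components
    AdaBoostClassifier(                   # boosting on top of the SVM classifier
        estimator=SVC(kernel="linear"),
        n_estimators=20,
        algorithm="SAMME",                # SAMME works with classifiers lacking predict_proba
    ),
)
# model.fit(X_train, y_train); model.predict(X_test)   # X: flattened face images
```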

Cites background from "Enhancing Face Recognition Under Unconstrained Background Clutter Using Color Based Segmentation"

  • ...grade is given to the samples by modifying the margin of the clusters [31]....


  • ...Sampling can be sample increment [31] - [33], sample reduction [34], [35], or combined sampling [36]....



References
Journal Article
TL;DR: A near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals, and that is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
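A minimal eigenfaces-style sketch of the projection-and-matching idea described above: project flattened face images onto the principal components of the training set and match by nearest neighbour in weight space. The array shapes, function names, and component count are assumptions for illustration.

```python
import numpy as np

def train_eigenfaces(train_faces, k=20):
    """train_faces: (n_samples, n_pixels) array of flattened, same-size face images."""
    mean_face = train_faces.mean(axis=0)
    centered = train_faces - mean_face
    # Principal components via SVD; rows of vt are the "eigenfaces".
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]
    weights = centered @ eigenfaces.T              # per-image weight vectors
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, weights, labels):
    """Return the label of the training face whose weights are closest to the probe's."""
    w = (probe - mean_face) @ eigenfaces.T
    return labels[np.argmin(np.linalg.norm(weights - w, axis=1))]
```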

14,128 citations

Journal Article
TL;DR: A face recognition algorithm insensitive to large variation in lighting direction and facial expression is developed; based on Fisher's linear discriminant, it produces well-separated classes in a low-dimensional subspace even under severe variation in lighting and facial expression.
Abstract: We develop a face recognition algorithm which is insensitive to large variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3D linear subspace of the high dimensional image space-if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate from this linear subspace. Rather than explicitly modeling this deviation, we linearly project the image into a subspace in a manner which discounts those regions of the face with large deviation. Our projection method is based on Fisher's linear discriminant and produces well separated classes in a low-dimensional subspace, even under severe variation in lighting and facial expressions. The eigenface technique, another method based on linearly projecting the image space to a low dimensional subspace, has similar computational requirements. Yet, extensive experimental results demonstrate that the proposed "Fisherface" method has error rates that are lower than those of the eigenface technique for tests on the Harvard and Yale face databases.
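A sketch of the Fisherface idea using scikit-learn: PCA is applied first (to avoid a singular within-class scatter matrix), then Fisher's linear discriminant yields a class-separating subspace in which a nearest-neighbour rule classifies probes. The component counts are illustrative assumptions, not the paper's settings.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

fisherface_model = make_pipeline(
    PCA(n_components=100),                 # roughly N - c dimensions in the classical formulation
    LinearDiscriminantAnalysis(),          # at most c - 1 discriminant directions
    KNeighborsClassifier(n_neighbors=1),   # nearest neighbour in the Fisher subspace
)
# fisherface_model.fit(X_train, y_train); fisherface_model.predict(X_test)
```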

11,674 citations

Journal Article
TL;DR: A new method for performing a nonlinear form of principal component analysis by the use of integral operator kernel functions is proposed and experimental results on polynomial feature extraction for pattern recognition are presented.
Abstract: A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.
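A brief sketch of the same idea with scikit-learn's KernelPCA; the polynomial degree echoes the five-pixel-product example above, while the number of components and input shape are assumptions.

```python
from sklearn.decomposition import KernelPCA

# Nonlinear principal components of flattened 16x16 image vectors (X shape: (n_samples, 256)).
kpca = KernelPCA(n_components=50, kernel="poly", degree=5)
# features = kpca.fit_transform(X)   # features feed a downstream pattern-recognition step
```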

7,611 citations

Journal Article
TL;DR: A new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation; it is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction.
Abstract: In this paper, a new technique coined two-dimensional principal component analysis (2DPCA) is developed for image representation. As opposed to PCA, 2DPCA is based on 2D image matrices rather than 1D vectors so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2DPCA and evaluate its performance, a series of experiments were performed on three face image databases: ORL, AR, and Yale face databases. The recognition rate across all trials was higher using 2DPCA than PCA. The experimental results also indicated that the extraction of image features is computationally more efficient using 2DPCA than PCA.
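A compact sketch of 2DPCA feature extraction as described above: the image covariance matrix is built directly from the 2D image matrices, and each image is projected onto its leading eigenvectors. Shapes, names, and the number of components are assumptions.

```python
import numpy as np

def twodpca_projection(images, k=5):
    """images: (n, h, w) stack of face images; returns (n, h, k) feature matrices."""
    mean_image = images.mean(axis=0)
    centered = images - mean_image
    # Image covariance matrix G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), size (w, w).
    g = np.einsum('nhw,nhv->wv', centered, centered) / images.shape[0]
    _, eigvecs = np.linalg.eigh(g)                 # eigenvalues in ascending order
    proj = eigvecs[:, ::-1][:, :k]                 # top-k eigenvectors as columns
    return centered @ proj                         # Y_i = A_i X, one feature matrix per image
```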

3,227 citations

Journal Article
TL;DR: In this article, a method for the representation of (pictures of) faces is presented, which results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector.
Abstract: A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.
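To illustrate the representation idea (rather than recognition), the fragment below reconstructs a face from its low-dimensional coefficient vector and reports the residual; on average the residual energy corresponds to the discarded components. It reuses the hypothetical eigenfaces sketch above, and all names are assumptions.

```python
import numpy as np

def reconstruct(probe, mean_face, eigenfaces):
    """Characterize a face by a low-dimensional coefficient vector and rebuild it."""
    coeffs = (probe - mean_face) @ eigenfaces.T    # low-dimensional characterization
    approx = mean_face + coeffs @ eigenfaces       # reconstruction from the coefficients
    error = np.linalg.norm(probe - approx)         # reconstruction residual
    return approx, coeffs, error
```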

2,024 citations