Topic

Eigenface

About: Eigenface is a research topic. Over the lifetime, 2128 publications have been published within this topic receiving 110119 citations.


Papers
Proceedings ArticleDOI
12 Jan 2005
TL;DR: The intrinsic reason that 2DPCA can outperform Eigenface is that fewer feature dimensions and more samples are used in 2DPCA than in Eigenface; a two-stage strategy, namely parallel image matrix compression (PIMC), is proposed to compress the image-matrix redundancy that exists among row vectors and column vectors.
Abstract: The canonical face recognition algorithms Eigenface and Fisherface are both based on a one-dimensional vector representation. However, with high feature dimensions and small training data, face recognition often suffers from the curse of dimensionality and the small sample problem. Recent research [4] shows that face recognition based on a direct 2D matrix representation, i.e. 2DPCA, obtains better performance than that based on the traditional vector representation. However, three questions are left unresolved in the 2DPCA algorithm: 1) what is the meaning of the eigenvalues and eigenvectors of the covariance matrix in 2DPCA; 2) why can 2DPCA outperform Eigenface; and 3) how can the dimension be reduced further after 2DPCA. In this paper, we analyze 2DPCA from a different view and prove that 2DPCA is actually a "localized" PCA with each row vector of an image as an object. With this explanation, we discover that the intrinsic reason 2DPCA can outperform Eigenface is that fewer feature dimensions and more samples are used in 2DPCA than in Eigenface. To further reduce the dimension after 2DPCA, a two-stage strategy, namely parallel image matrix compression (PIMC), is proposed to compress the image-matrix redundancy that exists among row vectors and column vectors. Exhaustive experimental results demonstrate that PIMC is superior to 2DPCA and Eigenface, and that PIMC+LDA outperforms 2DPCA+LDA and Fisherface.
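
The row-vector view described in the abstract lends itself to a compact illustration. The following is a minimal NumPy sketch, under assumed names and shapes, of computing 2DPCA as a PCA whose samples are the row vectors of the training images; it is not the paper's implementation, and the PIMC and LDA stages are not reproduced.

# Minimal sketch: 2DPCA viewed as a "localized" PCA over image row vectors.
# All names (two_d_pca, images, n_components) are illustrative assumptions.
import numpy as np

def two_d_pca(images, n_components):
    """images: array of shape (N, h, w); returns the mean image and a (w, n_components) projection."""
    mean_image = images.mean(axis=0)                  # (h, w)
    centered = images - mean_image                    # (N, h, w)
    # Treating every row of every centered image as one PCA sample yields the
    # same (w x w) covariance that standard 2DPCA builds from A_i^T A_i terms.
    rows = centered.reshape(-1, images.shape[2])      # (N*h, w)
    cov = rows.T @ rows / len(images)                 # (w, w)
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    return mean_image, eigvecs[:, ::-1][:, :n_components]

def project(image, mean_image, proj):
    # Each face becomes an (h x n_components) feature matrix instead of one long vector.
    return (image - mean_image) @ proj

# Usage on random data, just to show the shapes involved:
# imgs = np.random.rand(40, 112, 92)
# mean, W = two_d_pca(imgs, n_components=10)
# feat = project(imgs[0], mean, W)    # 112 x 10 rather than a 10304-D Eigenface vector

Because each image contributes h row-vector samples to a w-dimensional covariance, the estimate uses far more samples in far fewer dimensions than Eigenface's single full-length vector per image, which is the intuition the paper formalizes.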

17 citations

Proceedings ArticleDOI
26 Oct 1997
TL;DR: A neural network based on image synthesis, histogram adaptive quantization and the discrete cosine transformation (DCT) for object recognition with luminance, rotation and location invariance is proposed and enjoys the additional advantage of greatly reduced computational complexity.
Abstract: We propose a neural network based on image synthesis, histogram adaptive quantization and the discrete cosine transformation (DCT) for object recognition with luminance, rotation and location invariance. An efficient representation of the invariant features is constructed using a three-dimensional memory structure. The performance of luminance and rotation invariance is illustrated by reduced error rates in face recognition. The error rate of using a two-dimensional DCT is improved from 13.6% to 2.4% with the aid of the proposed image synthesis procedure. The 2.4% error rate is better than all previously reported results using Karhunen-Loeve transform convolution networks and eigenface models. In using the DCT, our approach also enjoys the additional advantage of greatly reduced computational complexity.
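
To make the DCT feature-extraction step concrete, here is a hedged sketch of extracting low-frequency 2-D DCT coefficients and matching them with a nearest-neighbour rule; the block size and function names are assumptions, and the paper's image synthesis, histogram adaptive quantization and neural network stages are omitted.

# Illustrative sketch only: low-frequency 2-D DCT coefficients as features.
import numpy as np
from scipy.fft import dctn

def dct_features(image, block=8):
    # Keep the top-left (low-frequency) block of the orthonormal 2-D DCT.
    coeffs = dctn(image.astype(float), norm="ortho")
    return coeffs[:block, :block].ravel()

# Nearest-neighbour matching against a gallery (illustrative usage):
# gallery = np.stack([dct_features(g) for g in gallery_images])
# probe = dct_features(probe_image)
# best_match = np.argmin(np.linalg.norm(gallery - probe, axis=1))

The reduced complexity the abstract mentions comes from the DCT being a fixed, fast, separable transform with only a handful of retained coefficients, as opposed to a data-dependent Karhunen-Loeve basis learned from the training set.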

16 citations

Posted Content
TL;DR: A novel face recognition method based on Principal Component Analysis (PCA) and Log-Gabor filters whose accuracy is less affected by eye-location errors and by the image normalization method used than that of a traditional PCA-based recognition method.
Abstract: In this article we propose a novel face recognition method based on Principal Component Analysis (PCA) and Log-Gabor filters. The main advantages of the proposed method are its simple implementation and training, and very high recognition accuracy. For the recognition experiments we used 5151 face images of 1311 persons from different sets of the FERET and AR databases, which allow us to analyze how recognition accuracy is affected by changes in facial expression, illumination, and aging. Recognition experiments with the FERET database (containing photographs of 1196 persons) showed that our method can achieve a maximal 97-98% rank-one recognition rate and a 0.3-0.4% Equal Error Rate. The experiments also showed that the accuracy of our method is less affected by eye-location errors and by the image normalization method used than that of a traditional PCA-based recognition method.
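
As a rough illustration of the Log-Gabor front end, the sketch below constructs a single frequency-domain Log-Gabor filter and returns the magnitude of the filtered image; the parameter values and helper names follow the common Log-Gabor definition and are assumptions rather than the paper's settings, and the subsequent PCA and matching stages are not shown.

# Sketch of one Log-Gabor filter applied in the frequency domain (assumed parameters).
import numpy as np

def log_gabor_filter(shape, f0=0.1, sigma_ratio=0.55, theta0=0.0, theta_sigma=np.pi / 8):
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    radius[0, 0] = 1.0                    # avoid log(0) at the DC term
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                    # Log-Gabor filters have no DC response
    theta = np.arctan2(fy, fx)
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dtheta ** 2) / (2 * theta_sigma ** 2))
    return radial * angular

def log_gabor_magnitude(image, **filter_kwargs):
    response = np.fft.ifft2(np.fft.fft2(image) * log_gabor_filter(image.shape, **filter_kwargs))
    return np.abs(response).ravel()

# Responses at several scales and orientations would be concatenated into one
# feature vector and then reduced with ordinary PCA before matching.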

16 citations

Proceedings ArticleDOI
22 Apr 2013
TL;DR: The paper introduces a novel framework for 3D face recognition that capitalizes on region covariance descriptors and Gaussian mixture models, and assesses the feasibility of the proposed framework on the Face Recognition Grand Challenge version 2 (FRGCv2) database with highly encouraging results.
Abstract: The paper introduces a novel framework for 3D face recognition that capitalizes on region covariance descriptors and Gaussian mixture models. The framework presents an elegant and coherent way of combining multiple facial representations, while simultaneously examining all computed representations at various levels of locality. The framework first computes a number of region covariance matrices/descriptors from different sized regions of several image representations and then adopts the unscented transform to derive low-dimensional feature vectors from the computed descriptors. By doing so, it enables computations in the Euclidean space, and makes Gaussian mixture modeling feasible. In the last step a support vector machine classification scheme is used to make a decision regarding the identity of the modeled input 3D face image. The proposed framework exhibits several desirable characteristics, such as an inherent mechanism for data fusion/integration (through the region covariance matrices), the ability to examine the facial images at different levels of locality, and the ability to integrate domain-specific prior knowledge into the modeling procedure. We assess the feasibility of the proposed framework on the Face Recognition Grand Challenge version 2 (FRGCv2) database with highly encouraging results.
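
To give a concrete sense of the region covariance descriptor the framework builds on, the sketch below computes a covariance matrix over simple per-pixel features (coordinates, intensity, gradients) for one facial region; the exact feature set is an assumption, and the paper's unscented-transform vectorization, Gaussian mixture modeling and SVM stages are not reproduced.

# Illustrative region covariance descriptor (assumed per-pixel feature set).
import numpy as np

def region_covariance(region):
    # region: 2-D array of pixel intensities for one facial region.
    h, w = region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    gy, gx = np.gradient(region.astype(float))          # vertical / horizontal gradients
    feats = np.stack([xx.ravel(), yy.ravel(), region.ravel(),
                      gx.ravel(), gy.ravel()], axis=0)   # (d, h*w) feature matrix
    return np.cov(feats)                                  # (d, d) covariance descriptor

# cov = region_covariance(face_crop)   # one symmetric descriptor per facial region (face_crop is hypothetical)

Because such descriptors are symmetric positive (semi-)definite matrices rather than ordinary vectors, the unscented-transform step described in the abstract is what maps them to low-dimensional Euclidean feature vectors suitable for Gaussian mixture modeling.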

16 citations

01 Jan 2005
TL;DR: In this chapter I give a personal view of the original context and motivation for the work, some of the strengths and limitations of the approach, and progress in the years since.
Abstract: Automated face recognition has a long history within the field of computer vision, and there have been several different classes of approaches to the problem. It has been about fifteen years since the "Eigenfaces" method first made an impression on the computer vision research community and helped spur interest in appearance-based recognition, biometrics and vision-based human-computer interfaces. In this chapter I give a personal view of the original context and motivation for the work, some of the strengths and limitations of the approach, and progress in the years since. The original Eigenfaces approach was in many respects a reaction to the feature-based approaches to face recognition prevalent in the mid-1980s. Appearance-based approaches to recognition complement feature- or shape-based approaches, and a practical face recognition system should have elements of both. Eigenfaces should not be viewed as a general approach to recognition, but rather as one tool out of many to be applied and evaluated in the appropriate context.

16 citations


Network Information
Related Topics (5)
Feature (computer vision): 128.2K papers, 1.7M citations, 87% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Image segmentation: 79.6K papers, 1.8M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 83% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    16
2022    49
2021    20
2020    43
2019    53
2018    40