
Showing papers on "Eigenface published in 2011"


Proceedings ArticleDOI
11 Mar 2011
TL;DR: The paper presents a methodology for face recognition based on an information-theory approach of coding and decoding the face image, using Principal Component Analysis for feature extraction and a feed-forward back-propagation neural network for recognition.
Abstract: A face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper presents a methodology for face recognition based on an information-theory approach of coding and decoding the face image. The proposed methodology connects two stages: feature extraction using Principal Component Analysis and recognition using a feed-forward back-propagation neural network. The goal is to implement the system (model) for a particular face and distinguish it from a large number of stored faces, with some real-time variations as well. The Eigenface approach uses the Principal Component Analysis (PCA) algorithm for the recognition of the images and gives an efficient way to find a lower-dimensional space.

1,727 citations
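
A minimal NumPy sketch of the eigenface coding stage described above (the PCA step whose low-dimensional weights would then be fed to the back-propagation network) is given below. The image size, number of components, and random stand-in data are assumptions for illustration, not the paper's setup.

import numpy as np

def eigenface_features(train_images, num_components=20):
    """Compute eigenfaces via PCA and project the training set onto them.

    train_images: array of shape (n_samples, height*width), one flattened face per row.
    Returns (mean_face, eigenfaces, weights), where weights are the PCA coefficients.
    """
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # Eigen-decomposition of the small (n x n) Gram matrix (the Turk & Pentland trick)
    gram = centered @ centered.T
    eigvals, eigvecs = np.linalg.eigh(gram)
    order = np.argsort(eigvals)[::-1][:num_components]
    eigenfaces = centered.T @ eigvecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # unit-norm eigenfaces
    weights = centered @ eigenfaces                     # low-dimensional features
    return mean_face, eigenfaces, weights

# Example with random data standing in for face images (40 images of 64x64 pixels)
faces = np.random.rand(40, 64 * 64)
mean_face, eigenfaces, weights = eigenface_features(faces)
# 'weights' (40 x 20) would then be fed to a feed-forward back-propagation classifier.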


Journal ArticleDOI
TL;DR: Under a spiked covariance model, a new iterative thresholding approach is proposed for estimating principal subspaces in the setting where the leading eigenvectors are sparse; the new approach is found to recover the principal subspace and leading eigenvectors consistently, and even optimally, in a range of high-dimensional sparse settings.
Abstract: Principal component analysis (PCA) is a classical dimension reduction method which projects data onto the principal subspace spanned by the leading eigenvectors of the covariance matrix. However, it behaves poorly when the number of features p is comparable to, or even much larger than, the sample size n. In this paper, we propose a new iterative thresholding approach for estimating principal subspaces in the setting where the leading eigenvectors are sparse. Under a spiked covariance model, we find that the new approach recovers the principal subspace and leading eigenvectors consistently, and even optimally, in a range of high-dimensional sparse settings. Simulated examples also demonstrate its competitive performance.

267 citations
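
As a rough illustration of the iterative-thresholding idea (not the estimator analyzed in the paper), a sparse leading eigenvector of a sample covariance matrix can be approximated by a power iteration whose iterates are soft-thresholded and renormalized. The threshold level and the spiked-covariance toy data below are assumptions of this sketch.

import numpy as np

def thresholded_power_iteration(S, threshold=0.05, n_iter=100):
    """Approximate a sparse leading eigenvector of S by alternating a power step
    with soft-thresholding and renormalization."""
    p = S.shape[0]
    v = np.random.default_rng(0).standard_normal(p)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = S @ v                                                  # power step
        w = np.sign(w) * np.maximum(np.abs(w) - threshold, 0.0)    # soft-threshold
        norm = np.linalg.norm(w)
        if norm == 0:                                              # threshold too aggressive
            break
        v = w / norm
    return v

# Spiked covariance example: identity noise plus one sparse spike
rng = np.random.default_rng(1)
p, n = 200, 100
spike = np.zeros(p); spike[:10] = 1 / np.sqrt(10)                  # sparse leading eigenvector
X = rng.standard_normal((n, p)) + 3.0 * rng.standard_normal((n, 1)) * spike
S = X.T @ X / n                                                    # sample covariance
v_hat = thresholded_power_iteration(S)
print(np.abs(spike @ v_hat))                                       # close to 1 if recovered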


Book ChapterDOI
01 Jan 2011
TL;DR: This chapter describes, in roughly chronological order, techniques that identify, parameterize, and analyze linear and nonlinear subspaces, from the original Eigenfaces technique to the recently introduced Bayesian method for probabilistic similarity analysis.
Abstract: Images of faces, represented as high-dimensional pixel arrays, often belong to a manifold of intrinsically low dimension. Face recognition, and computer vision research in general, has witnessed a growing interest in techniques that capitalize on this observation and apply algebraic and statistical tools for extraction and analysis of the underlying manifold. In this chapter, we describe in roughly chronological order techniques that identify, parameterize, and analyze linear and nonlinear subspaces, from the original Eigenfaces technique to the recently introduced Bayesian method for probabilistic similarity analysis. We also discuss comparative experimental evaluation of some of these techniques as well as practical issues related to the application of subspace methods for varying pose, illumination, and expression.

245 citations


Book
28 Jul 2011
TL;DR: This chapter discusses Morphable Models for Training a Component-Based Face Recognition System, a Unified Approach for Analysis and Synthesis of Images and Multimodal Biometrics: Augmenting Face with Other Cues.
Abstract: Preface PART I: THE BASICS Chapter 1: A Guided Tour for Face Processing Chapter 2: Eigenface and Beyond Chapter 3: Statistical Evaluation of Face Recognition Systems PART II: FACE MODELING Chapter 4: 3D Morphable Face Model: A Unified Approach for Analysis and Synthesis of Images Chapter 5: Expression-Invariant Three-Dimensional Face Recognition Chapter 6: 3D Face Modeling from Monocular Video Sequences Chapter 7: Face Modeling by Information Maximization Chapter 8: Face Recognition by Human Chapter 9: Predicting Face Recognition Success for Humans Chapter 10: Distributed Representation of Faces and Objects PART III: ADVANCED METHODS Chapter 11: On the Effect of Illumination and Face Recognition Chapter 12: Modeling Illumination Variation with Spherical Harmonics Chapter 13: Multi-Subregion Based Probabilistic Approach Toward Pose-Invariant Face Recognition Chapter 14: Morphable Models for Training a Component-Based Face Recognition System Chapter 15: Model-Based Face Modeling and Tracking with Application to Video Conferencing Chapter 16: 3D and MultiModal 3D & 2D Face Recognition Chapter 17: Beyond One Still Image: Face Recognition from Multiple Still Images or Video Sequence Chapter 18: Subset Modeling of Face Localization Error, Occlusion and Expression Chapter 19: Real-Time Robust Face and Facial Features Detection with Information-Based Maximum Discrimination Chapter 20: Current Landscape of Thermal Infrared Face Recognition Chapter 21: Multimodal Biometrics: Augmenting Face with Other Cues

184 citations


Journal ArticleDOI
TL;DR: A machine vision algorithm was developed to detect and count immature green citrus fruits in natural canopies using color images, and a novel 'eigenfruit' approach (inspired by the 'eigenface' face detection and recognition method) was used for green citrus detection.

120 citations


Journal ArticleDOI
TL;DR: A homomorphic filtering-based illumination normalization method is proposed that is simple and computationally fast, since mature and fast algorithms exist for the Fourier transform used in the homomorphic filter; the Eigenfaces method is then chosen to recognize the normalized face images.

118 citations
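
The homomorphic-filtering idea mentioned in this summary (illumination treated as a low-frequency multiplicative component and suppressed in the log/Fourier domain) can be sketched as follows. The Gaussian high-emphasis filter and its parameters are assumptions for the illustration, not the paper's settings.

import numpy as np

def homomorphic_normalize(image, sigma=10.0, gamma_low=0.5, gamma_high=1.5):
    """Suppress low-frequency illumination and boost reflectance detail.

    image: 2-D float array with non-negative pixel values.
    """
    log_img = np.log1p(image)                        # multiplicative -> additive model
    F = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    d2 = (y - rows / 2) ** 2 + (x - cols / 2) ** 2
    # Gaussian high-emphasis filter: attenuate low frequencies, amplify high ones
    H = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * sigma ** 2)))
    filtered = np.fft.ifft2(np.fft.ifftshift(F * H)).real
    return np.expm1(filtered)

# Example on a synthetic, unevenly lit image
img = np.random.rand(64, 64) + np.linspace(0, 2, 64)[None, :]   # gradient illumination
normalized = homomorphic_normalize(img)
# 'normalized' would then be passed to the Eigenfaces recognizer.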


Journal ArticleDOI
Wonjun Hwang, Haitao Wang, Hyun-Woo Kim, Seok-Cheol Kee, Junmo Kim
TL;DR: The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations, achieving an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapse.
Abstract: The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, a hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an “integral normalized gradient image,” by normalizing and integrating the smoothed gradients of a facial image. Then, for feature extraction of complementary classifiers, multiple face models based upon hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is then individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the face recognition grand challenge (FRGC) experimental protocols; the FRGC is a large available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and time lapse.

68 citations


Journal ArticleDOI
01 May 2011
TL;DR: A feature-selection algorithm based on a multiobjective genetic algorithm is used to analyze and discard irrelevant coefficients, offering a solution that considerably reduces the number of coefficients while also improving recognition rates.
Abstract: Although it shows enormous potential as a feature extractor, 2D principal component analysis produces numerous coefficients. Using a feature-selection algorithm based on a multiobjective genetic algorithm to analyze and discard irrelevant coefficients offers a solution that considerably reduces the number of coefficients, while also improving recognition rates.

61 citations
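
For context, plain 2DPCA, the feature extractor whose coefficients the genetic algorithm prunes, projects each image matrix onto the leading eigenvectors of an image covariance matrix, producing one coefficient matrix per face (hence the large number of coefficients). A minimal sketch with assumed sizes and stand-in data follows.

import numpy as np

def train_2dpca(images, num_vectors=8):
    """2DPCA: eigenvectors of the image covariance matrix G = E[(A - mean)^T (A - mean)].

    images: array of shape (n_samples, height, width).
    Returns (mean_image, projection) with projection of shape (width, num_vectors).
    """
    mean_image = images.mean(axis=0)
    centered = images - mean_image
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    projection = eigvecs[:, np.argsort(eigvals)[::-1][:num_vectors]]
    return mean_image, projection

def extract_2dpca(image, mean_image, projection):
    """Coefficient matrix of shape (height, num_vectors); many coefficients per face,
    which is why a feature-selection step can help."""
    return (image - mean_image) @ projection

faces = np.random.rand(40, 32, 32)                          # stand-in for a face data set
mean_image, projection = train_2dpca(faces)
coeffs = extract_2dpca(faces[0], mean_image, projection)    # 32 x 8 coefficients per face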


Journal ArticleDOI
01 Jun 2011
TL;DR: An algorithm named M2SA is proposed that inherits the ability of the MSA to decompose the interlaced semantic factors, does not depend on any assumptions about the data distribution, and can deal with a high percentage of missing values.
Abstract: Multilinear subspace analysis (MSA) is a promising methodology for pattern-recognition problems due to its ability to decompose data formed from the interaction of multiple factors. The MSA requires a large training set that is well organized in a single tensor consisting of data samples with all possible combinations of the contributory factors. However, such a “complete” training set is difficult (or impossible) to obtain in many real applications. The missing-value problem is therefore crucial to the practicality of the MSA but has hardly been investigated to date. To solve the problem, this paper proposes an algorithm named M2SA, which is advantageous in real applications because: 1) it inherits the ability of the MSA to decompose the interlaced semantic factors; 2) it does not depend on any assumptions about the data distribution; and 3) it can deal with a high percentage of missing values. M2SA is evaluated by face image modeling on two typical multifactorial applications, i.e., face recognition and facial age estimation. Experimental results show the effectiveness of M2SA even when the majority of the values in the training tensor are missing.

61 citations


Journal ArticleDOI
TL;DR: It is observed that recognition accuracy increases when fewer individuals have to be recognized and when the facial images are aligned; expression and pose have minimal effect on the recognition rate, while illumination has a great impact on recognition accuracy.
Abstract: In this work, we use the PCA-based eigenface method to build a face recognition system that has a recognition accuracy of more than 97% for the ORL database and 100% for the CMU databases. However, the main goal of this research is to identify the characteristics of eigenface-based face recognition while (1) the number of eigenface features or signatures in the training and test data is varied; (2) the amount of noise in the training and test data is varied; (3) the level of blurriness in the training and test data is varied; (4) the image size in the training and test data is varied; (5) variations in facial expression, pose, and illumination are incorporated in the training and test data; and (6) databases with different characteristics are used, for example with aligned and non-aligned images, or bright and dark images. We have observed that (1) in general, increasing the number of signatures per image increases the recognition rate, but the rate saturates after a certain point; (2) increasing the number of samples used to compute the covariance matrix in PCA increases the recognition accuracy for a given number of individuals to identify; (3) increases in noise and blurriness have different effects on the recognition accuracy; (4) reducing the image size has a very minimal effect on the recognition accuracy; (5) if fewer individuals have to be recognized, the recognition accuracy increases; (6) alignment of the facial images increases recognition accuracy; and (7) expression and pose have minimal effect on the recognition rate, while illumination has a great impact on recognition accuracy.

58 citations
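
The first experiment above (recognition rate as a function of the number of eigenface signatures) can be reproduced in outline with a nearest-neighbour classifier on PCA features. scikit-learn, the random stand-in data, and the train/test split are assumptions of this sketch, not the paper's setup; real ORL or CMU images would replace the random matrix.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Stand-in data: 40 subjects x 10 flattened images each (replace with ORL/CMU data)
rng = np.random.default_rng(0)
X = rng.random((400, 64 * 64))
y = np.repeat(np.arange(40), 10)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

for n_components in (5, 10, 20, 40, 80):
    pca = PCA(n_components=n_components).fit(X_train)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)
    acc = clf.score(pca.transform(X_test), y_test)
    print(f"{n_components:3d} eigenfaces -> accuracy {acc:.3f}")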


Journal ArticleDOI
TL;DR: A new face recognition algorithm based on Renyi entropy component analysis is reported, where kernel-based methodology is integrated with entropy analysis to choose the best principal component vectors that are subsequently used for pattern projection to a lower-dimensional space.

Proceedings ArticleDOI
27 May 2011
TL;DR: The experimental results show that the developed system achieves a good face recognition rate of 80% at a camera-to-subject distance of 40 cm to 60 cm, provided the subject's head orientation angle is within the range of -20 to +20 degrees.
Abstract: Security has become a very important issue in public and private institutions, and various security systems have been proposed and developed for crucial processes such as person identification, verification, or recognition, especially for building access control, suspect identification by the police, driver licensing, and many others. Face recognition has been an active area of research with numerous applications since the late 1980s and has become one of the important elements in security system development. This paper focuses on the study and development of an automated face recognition system with potential application to office door access control. The technique of eigenfaces based on principal component analysis (PCA) together with artificial neural networks has been applied in the system. The study includes an analysis of the influence of three main factors in face recognition, namely illumination, distance, and the subject's head orientation, on the developed face recognition system built for office door access control. The experimental results show that the developed system achieves a good face recognition rate of 80% at a camera-to-subject distance of 40 cm to 60 cm, provided the subject's head orientation angle is within the range of -20 to +20 degrees.
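
The eigenfaces-plus-neural-network combination used here can be prototyped concisely by chaining PCA with a multilayer perceptron classifier. scikit-learn, the component count, network size, and random stand-in data below are assumptions of the sketch, not the system described in the paper.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Stand-in data: 10 enrolled persons x 20 flattened face crops each
rng = np.random.default_rng(0)
X = rng.random((200, 50 * 50))
y = np.repeat(np.arange(10), 20)

# Eigenface features (PCA) followed by a feed-forward neural network
model = make_pipeline(
    PCA(n_components=30),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))        # predicted person IDs for three probe images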

Proceedings ArticleDOI
23 Mar 2011
TL;DR: Experimental results demonstrate the effectiveness of an efficient illumination-invariant face recognition system using the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA).
Abstract: Face recognition technology has evolved into a popular identification technique for verifying human identity. Using feature extraction methods and dimensionality reduction techniques from pattern recognition, a number of facial recognition systems have been produced with varying measures of success. Various face recognition algorithms and their extensions have been proposed in the past three decades. However, face recognition faces challenging problems in real-life applications because of variation in the illumination of face images. In recent years, research has focused on illumination-invariant face recognition systems and many approaches have been proposed, but several issues in face recognition across illumination variation still remain unsolved. This paper presents research on an efficient illumination-invariant face recognition system using the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). For processing the illumination-invariant image, low-frequency components of the DCT are used to normalize the illuminated image, odd and even DCT components are used to compensate for illumination variation, and PCA is used for recognition of the face images. Existing approaches to illumination-invariant face recognition are comprehensively reviewed and discussed. The proposed approach is validated with the Yale Face Database B. Experimental results demonstrate the effectiveness of this approach for face recognition.
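
A common way to realize DCT-based illumination normalization of the kind sketched above is to damp the lowest-frequency DCT coefficients (which mostly carry illumination) before inverting the transform. The size of the "low-frequency" block and whether the DC term is kept are assumptions of this illustration, not the paper's settings.

import numpy as np
from scipy.fft import dctn, idctn

def dct_illumination_normalize(image, low_freq=3, keep_dc=True):
    """Attenuate low-frequency DCT coefficients that mainly encode illumination."""
    coeffs = dctn(image, norm='ortho')
    mask = np.ones_like(coeffs)
    mask[:low_freq, :low_freq] = 0.0                 # discard the low-frequency block
    if keep_dc:
        mask[0, 0] = 1.0                             # keep overall brightness (DC term)
    return idctn(coeffs * mask, norm='ortho')

# Synthetic, unevenly lit face stand-in
img = np.random.rand(64, 64) + np.linspace(0, 1, 64)[:, None]
normalized = dct_illumination_normalize(img)
# 'normalized' images are then projected with PCA for recognition.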

Journal ArticleDOI
TL;DR: A modified two-dimension principal component analysis (2DPCA) and bidirectional principal component analyzed methods based on quaternion matrix are proposed to recognize and reconstruct a color face image.

Proceedings ArticleDOI
01 May 2011
TL;DR: This system provides an end-to-end solution for face recognition: it receives video input from a camera, detects the locations of the face(s) using the Viola-Jones algorithm, subsequently recognizes each face using the Eigenface algorithm, and outputs the results to a display.
Abstract: Face recognition systems play a vital role in many applications including surveillance, biometrics, and security. In this work, we present a complete real-time face recognition system consisting of face detection, recognition, and down-sampling modules implemented on an FPGA. Our system provides an end-to-end solution for face recognition: it receives video input from a camera, detects the locations of the face(s) using the Viola-Jones algorithm, subsequently recognizes each face using the Eigenface algorithm, and outputs the results to a display. Experimental results show that our complete face recognition system operates at 45 frames per second on a Virtex-5 FPGA.
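
A software analogue of this pipeline (Viola-Jones cascade detection followed by eigenface recognition) can be put together with OpenCV. This is only a sketch of the data flow, not the FPGA design; it assumes the opencv-contrib build and uses random noise in place of real, labelled training crops.

import cv2
import numpy as np

# Viola-Jones detector shipped with OpenCV
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

# Eigenface recognizer (requires the opencv-contrib-python package)
recognizer = cv2.face.EigenFaceRecognizer_create()
# Equal-sized grayscale face crops and integer person IDs (stand-ins for real enrollment data)
train_crops = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(10)]
train_labels = np.arange(10, dtype=np.int32)
recognizer.train(train_crops, train_labels)

cap = cv2.VideoCapture(0)                          # camera input
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))   # down-sampling step
        label, distance = recognizer.predict(face)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, str(label), (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow('recognized', frame)
    cv2.waitKey(0)
cap.release()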

Journal ArticleDOI
TL;DR: An efficient self-configurable systolic architecture is proposed in this paper for very large scale integration implementation of a face recognition system that applies principal component neural network (PCNN) with generalized Hebbian learning for extracting eigenfaces from the face database.
Abstract: An efficient self-configurable systolic architecture is proposed in this paper for very large scale integration implementation of a face recognition system. The proposed system applies principal component neural network (PCNN) with generalized Hebbian learning for extracting eigenfaces from the face database. It demonstrates a recognition performance of more than 85% when evaluated on the benchmark Yale and FRGC databases containing images with varying illumination and expression. Unlike the existing face recognition systems, the proposed approach not only recognizes the faces using computed eigenfaces, but also updates eigenfaces automatically whenever the face database changes. The challenge, however, lies in hardware realization of the PCNN-based face recognition system. In the presence of computation-intensive steps of varying nature, it is not straightforward to map the overall computation to a single systolic architecture. A primary contribution of this paper from the architecture point of view is an optimized mapping of fine-grained systolized signal flow graphs (SFGs) for each individual step of the algorithm on to a single self-configurable linear systolic array by appropriate merging of the computations pertaining to different nodes of different SFGs. The architecture has the flexibility of processing face images and databases of any size and it is easily scalable with the number of eigenfaces to be computed. The proposed PCNN-based systolic face recognition system has been implemented and evaluated on a Xilinx ML403 evaluation platform with Virtex-4 XC4VFX12 FPGA. The FPGA-based design for a reasonably large-sized face database can process more than 400 faces in a video image frame which is fast enough for video surveillance in busy public places and sensitive locations.
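
The generalized Hebbian learning mentioned above (Sanger's rule) extracts leading eigenvectors with purely local updates, which is part of what makes it attractive for a systolic hardware mapping. A minimal software sketch follows; the learning rate, epoch count, and toy data are assumptions, and the code is not a model of the proposed architecture.

import numpy as np

def generalized_hebbian(X, num_components=5, lr=0.01, epochs=50, seed=0):
    """Sanger's rule: rows of W converge to the leading eigenvectors of the data
    covariance when (mean-centred) samples x are presented one at a time."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.standard_normal((num_components, n_features)) * 0.01
    for _ in range(epochs):
        for x in X:
            y = W @ x                                              # component outputs
            # Sanger update: dW = lr * (y x^T - lower_triangular(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy centred data standing in for mean-subtracted face vectors
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20)) @ np.diag(np.linspace(3, 0.1, 20))
W = generalized_hebbian(X)
# Rows of W approximate the leading eigenvectors (eigenfaces, for face data).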

Journal Article
TL;DR: A comparative study of three most recently methods for face recognition, one of the approach is eigenface, fisherfaces and other one is the elastic bunch graph matching.
Abstract: The technology of face recognition has become mature within these few years System, using the face recognition, has become true in real life In this paper, we will have a comparative study of three most recently methods for face recognition One of the approach is eigenface, fisherfaces and other one is the elastic bunch graph matching After the implementation of the above three methods, we learn the advantages and Disadvantages of each approach and the difficulties for the implementation

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This work presents new symmetry scores of the face and uses the scores to compare the symmetry in several subgroups of a face database, finding a significant difference in face symmetry between the men and women subjects in the database.
Abstract: Recent research in the area of automatic machine recognition of human faces has shown that there may be an advantage in utilizing face symmetry to improve recognition accuracy. While promising, this work has led to several open questions. What is a good feature description or score of the symmetry of the face? Is there a statistical significance between face symmetry and face recognition? We present new symmetry scores of the face and use the scores to compare the symmetry in several subgroups of a face database. A 3D face database is used to remove the effects of illumination which should improve the reliability of the symmetry score. We find a significant difference in face symmetry between the men and women subjects in the database. The database is then partitioned into most symmetric and least symmetric subjects based on the symmetry scores. The average-half-face is utilized in our face recognition experiments to take into account the symmetry of the face. Face recognition with eigenfaces using the average-half-face is significantly higher than using the full face in all subgroups regardless of symmetry score. However, face recognition using the full face does depend on the symmetry score and generally favors the least symmetric subjects.
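
The average-half-face used in the experiments above is easy to form once faces are aligned so the symmetry axis is the vertical centre line: one half is mirrored and averaged with the other. The alignment assumption, even image width, and the simple asymmetry measure in the comment are simplifications for this sketch, not the paper's symmetry scores.

import numpy as np

def average_half_face(face):
    """Average the left half with the mirrored right half of an aligned face image.

    face: 2-D array whose vertical midline is the symmetry axis (width assumed even).
    Returns an array of half the original width.
    """
    half = face.shape[1] // 2
    left = face[:, :half]
    right_mirrored = face[:, half:][:, ::-1]
    # A crude asymmetry measure (not the paper's scores) would be
    # np.mean(np.abs(left - right_mirrored)).
    return (left + right_mirrored) / 2.0

face = np.random.rand(64, 64)                  # stand-in for an aligned face image
half_face = average_half_face(face)            # 64 x 32, used in place of the full face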

Journal ArticleDOI
TL;DR: Suitable methods are proposed for a quantitative metrological characterization of face measurement systems, on which recognition procedures are based, and are applied to three different algorithms based either on linear discrimination, on eigenface analysis, or on feature detection.
Abstract: Security systems based on face recognition through video surveillance systems deserve great interest. Their use is important in several areas including airport security, identification of individuals and access control to critical areas. These systems are based either on the measurement of details of a human face or on a global approach whereby faces are considered as a whole. The recognition is then performed by comparing the measured parameters with reference values stored in a database. The result of this comparison is not deterministic because measurement results are affected by uncertainty due to random variations and/or to systematic effects. In these circumstances the recognition of a face is subject to the risk of a faulty decision. Therefore, a proper metrological characterization is needed to improve the performance of such systems. Suitable methods are proposed for a quantitative metrological characterization of face measurement systems, on which recognition procedures are based. The proposed methods are applied to three different algorithms based either on linear discrimination, on eigenface analysis, or on feature detection.

Journal Article
TL;DR: This work treats the face recognition problem as an intrinsically two-dimensional (2D) recognition problem rather than one requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2D characteristic views.
Abstract: Our approach treats the face recognition problem as an intrinsically two-dimensional (2D) recognition problem rather than one requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as “eigenfaces”, because they are the eigenvectors (principal components) of the set of faces. They do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals.

Journal ArticleDOI
TL;DR: A novel method for image feature extraction, namely the two-dimensional local graph embedding, which is based on maximum margin criterion and thus not necessary to convert the image matrix into high-dimensional image vector and directly avoid computing the inverse matrix in the discriminant criterion is proposed.

Proceedings ArticleDOI
22 Dec 2011
TL;DR: A new approach is presented for recognizing the face of a person considering the expressions of the same human face at different instances of time, by combining principal component analysis (PCA) for feature extraction and a minimum distance classifier (MDC) for classification.
Abstract: Facial expressions convey non-verbal cues, which play an important role in interpersonal relations. Automatic recognition of a human face based on facial expression can be an important component of a natural human-machine interface, and may also be used in behavioral science. Although human beings can recognize faces practically without any effort, reliable face recognition by machine is a challenge. This paper presents a new approach for recognizing the face of a person considering the expressions of the same human face at different instances of time. The methodology combines principal component analysis (PCA) for feature extraction and a minimum distance classifier (MDC) for classification. Experiments are done on the AT&T dataset and the recognition rate reaches 96.7% for different facial expressions.
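
The PCA-plus-minimum-distance-classifier combination described above amounts to assigning a probe image to the class whose mean feature vector is nearest in the eigenface space. A compact sketch with assumed sizes and random stand-in data follows.

import numpy as np
from sklearn.decomposition import PCA

def train_mdc(features, labels):
    """Minimum distance classifier: store one mean feature vector per class."""
    classes = np.unique(labels)
    means = np.array([features[labels == c].mean(axis=0) for c in classes])
    return classes, means

def predict_mdc(feature, classes, means):
    """Assign the class whose mean is closest in Euclidean distance."""
    return classes[np.argmin(np.linalg.norm(means - feature, axis=1))]

# Stand-in for AT&T-style data: 40 subjects, 5 expressions each, flattened 92x112 images
rng = np.random.default_rng(0)
X = rng.random((200, 92 * 112))
y = np.repeat(np.arange(40), 5)
pca = PCA(n_components=50).fit(X)
classes, means = train_mdc(pca.transform(X), y)
print(predict_mdc(pca.transform(X[:1])[0], classes, means))   # likely recovers label 0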

Proceedings Article
01 Aug 2011
TL;DR: This paper presents a technique for the identification of great apes, in particular chimpanzees, using state-of-the-art algorithms for human face recognition in combination with several classification schemes.
Abstract: In recent years, thousands of species populations have declined catastrophically, leaving many species on the brink of extinction. Several biological studies have shown that primates such as chimpanzees and gorillas are especially threatened. An essential part of effective biodiversity conservation management is population monitoring using remote camera devices. However, due to the large amount of data, the manual analysis of video recordings is extremely time consuming and highly cost intensive. Consequently, there is a high demand for automatic analytical routine procedures using computer vision techniques to overcome this issue. In this paper we present a technique for the identification of great apes, in particular chimpanzees, using state-of-the-art algorithms for human face recognition in combination with several classification schemes. For benchmark purposes we provide a publicly available dataset of captive chimpanzees. In our experiments we applied several common techniques such as the well-known Eigenfaces, Fisherfaces, Laplacianfaces, and Randomfaces approaches to identify individuals. We compare all of these methods in combination with the classification approaches Nearest Neighbor (NN), Support Vector Machine (SVM), and a newer concept for face recognition, Sparse Representation Classification (SRC) based on Compressive Sensing (CS).

Proceedings ArticleDOI
16 Apr 2011
TL;DR: The experimental results show the feasibility and real-time performance of the system; when the training library was used as the test library, the classification rate was 100%.
Abstract: Vehicle classification is a hard task in ITS. A real-time vehicle classification method based on eigenfaces is proposed; it includes two main steps, training and classification. In the training step, the time-averaged image approach is first used to obtain and update the background model, and the background-difference approach is then used to detect and extract the outline of a moving vehicle. The left, right, and bottom borders of the vehicle face are determined from the corresponding borders of the vehicle outline, and the height of the vehicle face is set to a fixed empirical value. After normalization and other necessary preprocessing steps, the vehicle face image library is built. Finally, the eigenvectors of each vehicle face image are extracted using the eigenface method and the vehicle face feature library is constructed from these eigenvectors. In the classification step, a vehicle face image and its eigenvector are first extracted in the same way, and the difference between the vehicle face eigenvector and the eigenvectors in the feature library is then compared using the minimum distance method. The experimental platform is built on OpenCV and Visual C++, and the vehicle face feature library is constructed from 100 vehicle faces. In the experiments, when the size of the vehicle face image is 80×30 pixels, the average time for one vehicle classification is about 1.88 ms; for sizes of 120×50 and 322×131, the times are about 3.28 ms and 30.69 ms, respectively. The experimental results show the feasibility and real-time performance of the system; when the training library was used as the test library, the classification rate was 100%.
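
The training-stage steps described above (time-averaged background model, background difference, and bounding the vehicle outline before cropping a fixed-height "vehicle face") can be outlined with OpenCV. The update rate, difference threshold, minimum blob area, face height, and the video filename are all assumptions for this sketch, not the paper's values.

import cv2
import numpy as np

ALPHA = 0.05            # background update rate (assumed)
THRESH = 30             # difference threshold (assumed)
FACE_HEIGHT = 30        # fixed empirical vehicle-face height in pixels (assumed)

cap = cv2.VideoCapture('traffic.avi')           # hypothetical input video
ret, frame = cap.read()
assert ret, "could not read the input video"
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Time-averaged background update
    cv2.accumulateWeighted(gray.astype(np.float32), background, ALPHA)
    # Background difference gives the moving-vehicle mask
    diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))
    _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)       # vehicle outline borders
        if w * h < 500:                              # ignore small blobs (assumed area)
            continue
        # Vehicle face: same left/right/bottom borders, fixed height above the bottom
        face = gray[max(y + h - FACE_HEIGHT, 0):y + h, x:x + w]
        face = cv2.resize(face, (80, 30))            # normalization before eigen-analysis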

Journal ArticleDOI
TL;DR: This paper proposes a novel supervised learning algorithm for generating face relevance maps to improve the discriminating capability of existing methods, and successfully applies the developed technique to face identification based on the Eigenfaces and Fisherfaces methods.

Journal ArticleDOI
TL;DR: An approach to face recognition based on multi-level transfer-function quantum neural networks (QNN) and multi-layer classifiers is presented; results show that the identification method is robust, effective, and feasible in more complex environments.

Journal Article
TL;DR: The proposed approach is essentially to implement and verify the Eigenfaces for Recognition algorithm, which solves the recognition problem for two-dimensional representations of faces using principal component analysis.
Abstract: A real-time system for recognizing faces in a video stream provided by a surveillance camera was implemented, including real-time face detection. Both face detection and face recognition techniques are briefly presented, without skipping the important technical aspects. The proposed approach is essentially to implement and verify the Eigenfaces for Recognition algorithm, which solves the recognition problem for two-dimensional representations of faces using principal component analysis. The snapshots, representing input images for the proposed system, are projected into a face space (feature space) that best captures the variation in the training set of face images. The face space is defined by the 'eigenfaces', which are the eigenvectors of the set of faces. Each eigenface contributes to the reconstruction of a new face image projected onto the face space with a meaningful weight. The projection of the new image into this feature space is then compared to the stored projections of the training set to identify the person using the Euclidean distance. The implemented system performs real-time face detection and face recognition, and can give feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions.

Proceedings ArticleDOI
11 Apr 2011
TL;DR: The results show that genetic-based feature selection reduces the number of features needed by approximately 50% while improving identification accuracy over the baseline, and that genetic-based feature weighting significantly improves the accuracy of PCA-based face recognition.
Abstract: In this paper, we evaluate genetic-based feature selection and weighting for PCA-based face recognition. This work is the first attempt at applying Genetic Algorithm (GA) based feature selection to the Eigenface method. The results show that genetic-based feature selection reduces the number of features needed by approximately 50% while improving the identification accuracy over the baseline. Genetic-based feature weighting significantly improves the accuracy from an 87.14% to a 92.5% correct recognition rate.
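
A bare-bones version of genetic feature weighting over eigenface coefficients (random weight vectors evolved by selection, crossover, and mutation, with recognition accuracy as the fitness) might look as follows. The population size, operators, and nearest-neighbour fitness are assumptions of this sketch rather than the paper's GA configuration, and the toy features stand in for real PCA coefficients.

import numpy as np

def fitness(weights, train_feats, train_labels, test_feats, test_labels):
    """Recognition accuracy of a 1-NN classifier on weighted eigenface coefficients."""
    correct = 0
    for feat, label in zip(test_feats, test_labels):
        d = np.linalg.norm((train_feats - feat) * weights, axis=1)
        correct += train_labels[np.argmin(d)] == label
    return correct / len(test_labels)

def evolve_weights(train_feats, train_labels, test_feats, test_labels,
                   pop_size=20, generations=30, mutation=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = train_feats.shape[1]
    pop = rng.random((pop_size, n))                       # initial random weight vectors
    for _ in range(generations):
        scores = np.array([fitness(w, train_feats, train_labels, test_feats, test_labels)
                           for w in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n) < 0.5                             # uniform crossover
            child = np.where(mask, a, b) + mutation * rng.standard_normal(n)
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(w, train_feats, train_labels, test_feats, test_labels)
                       for w in pop])
    return pop[np.argmax(scores)]

# Toy eigenface coefficients standing in for real PCA features
rng = np.random.default_rng(1)
train_feats, test_feats = rng.random((100, 30)), rng.random((50, 30))
train_labels, test_labels = rng.integers(10, size=100), rng.integers(10, size=50)
best_weights = evolve_weights(train_feats, train_labels, test_feats, test_labels)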

Proceedings ArticleDOI
11 Oct 2011
TL;DR: A comparison of several classical as well as recent state-of-the-art face recognition methods using one standard face database and two databases of ship images collected from satellite imagery is presented.
Abstract: Face recognition research has gained significant interest in recent years, which has resulted in the development of many state-of-the-art methods. However, it is not well known how domain specific these methods are to the problem of face recognition. Could these algorithms be used to classify and identify other objects, such as ships seen from electro-optical satellite imagery? Face recognition research shares many of the same challenges with many other types of classification research, such as illumination, pose, and resolution variation. Therefore, a study of this type is warranted. We present a comparison of several classical (e.g., eigenfaces and Fisherfaces) as well as recent state-of-the-art face recognition methods (e.g., sparse representation and local binary patterns) using one standard face database and two databases of ship images collected from satellite imagery. An analysis of these results as well as future directions conclude the paper.

Journal ArticleDOI
01 Mar 2011
TL;DR: A new technique called structural two-dimensional principal component analysis (S2DPCA) is proposed for image recognition that identifies structural discriminative information contained both within and between the rows of the images.
Abstract: In this paper, a new technique called structural two-dimensional principal component analysis (S2DPCA) is proposed for image recognition. S2DPCA is a subspace learning method that identifies structural information for discrimination. Different from conventional two-dimensional principal component analysis (2DPCA), which only reflects within-row information of images, the goal of S2DPCA is to discover structural discriminative information contained both within and between the rows of the images. By contrast with 2DPCA, S2DPCA is directly based on augmented images encoding the corresponding row membership, and the projection directions of S2DPCA are obtained by solving an eigenvalue problem of the augmented image covariance matrix. Computationally, S2DPCA is straightforward and comparable with 2DPCA. Like 2DPCA, the singularity problem is completely avoided in S2DPCA. Experiments on face recognition and handwritten digit recognition are presented to show the effectiveness of the proposed approach.