
Showing papers on "Principal component analysis published in 2002"


Journal ArticleDOI
TL;DR: Independent component analysis (ICA), a generalization of PCA, was applied to face images, using a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons; the resulting representations were superior to PCA-based representations for recognizing faces across days and changes in expression.
Abstract: A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance.
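A hedged sketch of the two architectures described above, using scikit-learn's FastICA as a stand-in for the infomax (sigmoidal-neuron) ICA used in the paper; the data matrix and component count below are placeholders, not the FERET setup.

```python
# Sketch of the two ICA architectures; X is an (n_images, n_pixels) matrix
# of vectorized face images (random placeholder data here).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.random((100, 32 * 32))          # stand-in for real face data
X = X - X.mean(axis=0)                  # center the pixels

k = 20                                  # number of independent components

# Architecture I: images as random variables, pixels as outcomes.
# ICA on the transposed matrix yields spatially local basis images.
ica1 = FastICA(n_components=k, random_state=0)
basis_images = ica1.fit_transform(X.T).T   # (k, n_pixels) basis images

# Architecture II: pixels as random variables, images as outcomes.
# ICA on X itself yields a factorial code: independent coefficients per face.
ica2 = FastICA(n_components=k, random_state=0)
factorial_code = ica2.fit_transform(X)     # (n_images, k) face codes
```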

2,044 citations


01 May 2002
TL;DR: PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension.
Abstract: Introduction This tutorial is designed to give the reader an understanding of Principal Components Analysis (PCA). PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in data of high dimension. Before getting to a description of PCA, this tutorial first introduces mathematical concepts that will be used in PCA. It covers standard deviation, covariance, eigenvectors and eigenvalues. This background knowledge is meant to make the PCA section very straightforward, but can be skipped if the concepts are already familiar. There are examples all the way through this tutorial that are meant to illustrate the concepts being discussed. If further information is required, the mathematics textbook "Elementary Linear Algebra 5e" (ISBN 0-471-85223-6) is a good source of information regarding the mathematical background.
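The tutorial's recipe reduces to a few lines of linear algebra; a minimal numpy sketch on synthetic data:

```python
# PCA along the lines of the tutorial: center the data, form the covariance
# matrix, take its eigenvectors/eigenvalues, and project onto the top axes.
import numpy as np

def pca(X, n_components):
    """X: (n_samples, n_features). Returns scores, axes, eigenvalues."""
    Xc = X - X.mean(axis=0)                # subtract the mean of each variable
    C = np.cov(Xc, rowvar=False)           # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(C)   # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]      # sort by decreasing eigenvalue
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    W = eigvecs[:, :n_components]          # principal axes as columns
    return Xc @ W, W, eigvals[:n_components]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
scores, axes, variances = pca(X, 2)
```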

1,926 citations


Journal ArticleDOI
TL;DR: The equivalence of the processing matrices, objective functions, optimal basis vectors, and mean-square errors of the three proper orthogonal decomposition (POD) methods, together with their asymptotic connections, is demonstrated and proved when the methods are used to handle the POD of discrete random vectors.

682 citations


Journal ArticleDOI
TL;DR: A novel paradigm is proposed whereby data information is encapsulated in determining the structure and initial parameters of the RBF neural classifier before learning takes place, and the dimension of the search space is drastically reduced in the gradient paradigm.
Abstract: A general and efficient design approach using a radial basis function (RBF) neural classifier to cope with small training sets of high dimension, which is a problem frequently encountered in face recognition, is presented. In order to avoid overfitting and reduce the computational burden, face features are first extracted by the principal component analysis (PCA) method. Then, the resulting features are further processed by the Fisher's linear discriminant (FLD) technique to acquire lower-dimensional discriminant patterns. A novel paradigm is proposed whereby data information is encapsulated in determining the structure and initial parameters of the RBF neural classifier before learning takes place. A hybrid learning algorithm is used to train the RBF neural networks so that the dimension of the search space is drastically reduced in the gradient paradigm. Simulation results conducted on the ORL database show that the system achieves excellent performance both in terms of error rates of classification and learning efficiency.
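A hedged sketch of this pipeline on scikit-learn's Olivetti faces (which correspond to the ORL database): PCA for dimensionality reduction, Fisher's linear discriminant for discriminant features, then a simple RBF network built from k-means centers with least-squares output weights. The paper's structure initialization and hybrid learning algorithm are not reproduced; all counts below are assumptions.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

faces = fetch_olivetti_faces()
X, y = faces.data, faces.target

X_pca = PCA(n_components=60).fit_transform(X)     # reduce dimension first
X_fld = LinearDiscriminantAnalysis(n_components=30).fit_transform(X_pca, y)

# simple RBF network: k-means centers, Gaussian hidden layer, linear output
centers = KMeans(n_clusters=80, n_init=5, random_state=0).fit(X_fld).cluster_centers_
sigma = np.mean(np.linalg.norm(X_fld[:, None] - centers, axis=2))  # width heuristic

def rbf_features(Z):
    d = np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

H = rbf_features(X_fld)
T = np.eye(y.max() + 1)[y]                        # one-hot class targets
W, *_ = np.linalg.lstsq(H, T, rcond=None)         # linear output weights
pred = rbf_features(X_fld).dot(W).argmax(axis=1)  # training-set predictions
```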

656 citations


Journal ArticleDOI
TL;DR: By adopting a polynomial kernel, the principal components can be computed within the space spanned by high-order correlations of the input pixels making up a facial image, thereby producing good performance.
Abstract: Kernel principal component analysis (PCA) was previously proposed as a nonlinear extension of PCA. The basic idea is to first map the input space into a feature space via a nonlinear mapping and then compute the principal components in that feature space. This article adopts kernel PCA as a mechanism for extracting facial features. By adopting a polynomial kernel, the principal components can be computed within the space spanned by high-order correlations of the input pixels making up a facial image, thereby producing good performance.
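A minimal sketch using scikit-learn's KernelPCA with a polynomial kernel on placeholder face data; the component count and kernel degree are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.random((100, 64 * 64))        # stand-in for vectorized face images

# A degree-d polynomial kernel works implicitly in the space of order-d
# pixel products, so the components capture high-order pixel correlations.
kpca = KernelPCA(n_components=40, kernel="poly", degree=2)
features = kpca.fit_transform(X)      # (100, 40) nonlinear face features
```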

520 citations


Journal ArticleDOI
TL;DR: An important function of hyperspectral signal processing is to eliminate the redundancy in the spectral and spatial sample data while preserving the high-quality features needed for detection, discrimination, and classification.
Abstract: Electro-optical remote sensing involves the acquisition of information about an object or scene without coming into physical contact with it. This is achieved by exploiting the fact that the materials comprising the various objects in a scene reflect, absorb, and emit electromagnetic radiation in ways characteristic of their molecular composition and shape. If the radiation arriving at the sensor is measured at each wavelength over a sufficiently broad spectral band, the resulting spectral signature, or simply spectrum, can be used (in principle) to uniquely characterize and identify any given material. An important function of hyperspectral signal processing is to eliminate the redundancy in the spectral and spatial sample data while preserving the high-quality features needed for detection, discrimination, and classification. This dimensionality reduction is implemented in a scene-dependent (adaptive) manner and may be implemented as a distinct step in the processing or as an integral part of the overall algorithm. The most widely used algorithm for dimensionality reduction is principal component analysis (PCA) or, equivalently, Karhunen-Loeve transformation.

442 citations


01 Jan 2002
TL;DR: In this article, the use of principal component analysis (PCA) as a preprocessing technique for the classification of hyperspectral images was studied; the results showed that using only the first few principal component images can yield about a 70 percent correct classification rate.
Abstract: The availability of hyperspectral images expands the capability of using image classification to study detailed characteristics of objects, but at a cost of having to deal with huge data sets. This work studies the use of principal component analysis as a preprocessing technique for the classification of hyperspectral images. Two hyperspectral data sets, HYDICE and AVIRIS, were used for the study. A brief presentation of the principal component analysis approach is followed by an examination of the information contents of the principal component image bands, which revealed that only the first few bands contain significant information. The use of the first few principal component images can yield about a 70 percent correct classification rate. This study suggests the benefit and efficiency of using the principal component analysis technique as a preprocessing step for the classification of hyperspectral images.
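A minimal sketch of this preprocessing recipe on a synthetic cube: flatten to (pixels, bands), check the cumulative explained variance, keep the first few principal component images, and classify on them. The cube, labels, and classifier choice are placeholders, not the HYDICE/AVIRIS setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
cube = rng.random((50, 50, 200))               # rows x cols x spectral bands
labels = rng.integers(0, 5, size=(50, 50))     # per-pixel class labels

X = cube.reshape(-1, cube.shape[-1])           # (n_pixels, n_bands)
pca = PCA(n_components=5).fit(X)
print(pca.explained_variance_ratio_.cumsum())  # first few bands dominate
Z = pca.transform(X)                           # (n_pixels, 5) PC images

clf = GaussianNB().fit(Z, labels.ravel())      # classify on PC features only
```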

392 citations


Journal ArticleDOI
TL;DR: An analysis is presented of how long a simulation should be to obtain relevant results for global motions; it reveals that the cosine content of the principal components is a good indicator of bad sampling.
Abstract: With molecular dynamics protein dynamics can be simulated in atomic detail. Current computers are not fast enough to probe all available conformations, but fluctuations around one conformation can be sampled to a reasonable extent. The motions with the largest fluctuations can be filtered out of a simulation using covariance or principal component analysis. A problem with this analysis is that random diffusion can appear as correlated motion. An analysis is presented of how long a simulation should be to obtain relevant results for global motions. The analysis reveals that the cosine content of the principal components is a good indicator for bad sampling.
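A minimal numpy version of the cosine-content diagnostic, discretizing the published definition under the assumption of uniformly spaced frames; values near 1 for the first few principal components indicate diffusion-like, poorly sampled motion.

```python
import numpy as np

def cosine_content(p, i=1):
    """Cosine content of a principal component projection p for mode i."""
    N = len(p)
    cos_term = np.cos(i * np.pi * np.arange(N) / N)
    return 2.0 * np.dot(cos_term, p) ** 2 / (N * np.dot(p, p))

# a random walk (pure diffusion) tends to score near 1; well-sampled
# oscillatory motion scores near 0
walk = np.cumsum(np.random.default_rng(0).normal(size=1000))
print(cosine_content(walk - walk.mean(), i=1))
```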

375 citations


Journal ArticleDOI
Baback Moghaddam
TL;DR: The experimental results demonstrate the simplicity, computational economy and performance superiority of the Bayesian subspace method over principal manifold techniques for visual matching.
Abstract: Investigates the use of linear and nonlinear principal manifolds for learning low-dimensional representations for visual recognition. Several leading techniques - principal component analysis (PCA), independent component analysis (ICA) and nonlinear kernel PCA (KPCA) - are examined and tested in a visual recognition experiment using 1,800+ facial images from the "FERET" (FacE REcognition Technology) database. We compare the recognition performance of nearest-neighbor matching with each principal manifold representation to that of a maximum a posteriori (MAP) matching rule using a Bayesian similarity measure derived from dual probabilistic subspaces. The experimental results demonstrate the simplicity, computational economy and performance superiority of the Bayesian subspace method over principal manifold techniques for visual matching.

313 citations


Journal ArticleDOI
TL;DR: In this article, multiscale principal component analysis (MSPCA) is applied to fault detection and diagnosis in chemical process monitoring and control; it is shown to outperform the conventional PCA-based approach in detecting and identifying real process faults in an industrial process.
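A hedged sketch of the multiscale PCA recipe this line of work builds on (in the spirit of Bakshi's MSPCA): wavelet-decompose each measured variable, then fit a PCA on the coefficient matrix at each scale. PyWavelets, the wavelet choice, and the component counts are all assumptions.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))                  # time samples x variables

level = 3
# wavelet coefficients per variable: [cA_L, cD_L, ..., cD_1]
coeffs = [pywt.wavedec(X[:, j], "db4", level=level) for j in range(X.shape[1])]

pca_per_scale = []
for s in range(level + 1):                      # approximation + detail scales
    scale_matrix = np.column_stack([c[s] for c in coeffs])
    pca_per_scale.append(PCA(n_components=2).fit(scale_matrix))
# monitoring statistics (T^2, SPE) would then be formed per scale
```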

309 citations


Journal ArticleDOI
TL;DR: A comparison of PCA and ICA revealed significant differences in their treatment of both structured and random noise, while PCA was superior for isolation and removal of random noise.

Journal ArticleDOI
TL;DR: In this paper, the authors provide insights into the physical interpretation of the proper orthogonal modes using the singular value decomposition (SVD) in the field of structural dynamics.

Book
01 Jan 2002
TL;DR: In this book, the authors present an overview of the state of the art in multidimensional systems and their application in the medical domain, including the use of MTS and MTGS.
Abstract: Preface. Acknowledgments. Terms and Symbols. Definitions of Mathematical and Statistical Terms. 1 Introduction. 1.1 The Goal. 1.2 The Nature of a Multidimensional System. 1.2.1 Description of Multidimensional Systems. 1.2.2 Correlations between the Variables. 1.2.3 Mahalanobis Distance. 1.2.4 Robust Engineering/Taguchi Methods. 1.3 Multivariate Diagnosis: The State of the Art. 1.3.1 Principal Component Analysis. 1.3.2 Discrimination and Classification Method. 1.3.3 Stepwise Regression. 1.3.4 Test of Additional Information (Rao's Test). 1.3.5 Multiple Regression. 1.3.6 Multivariate Process Control Charts. 1.3.7 Artificial Neural Networks. 1.4 Approach. 1.4.1 Classification versus Measurement. 1.4.2 Normals versus Abnormals. 1.4.3 Probabilistic versus Data Analytic. 1.4.4 Dimensionality Reduction. 1.5 Refining the Solution Strategy. 1.6 Guide to This Book. 2 MTS and MTGS. 2.1 A Discussion of Mahalanobis Distance. 2.2 Objectives of MTS and MTGS. 2.2.1 Mahalanobis Distance (Inverse Matrix Method). 2.2.2 Gram-Schmidt Orthogonalization Process. 2.2.3 Proof That Equations 2.2 and 2.3 Are the Same. 2.2.4 Calculation of the Mean of the Mahalanobis Space. 2.3 Steps in MTS. 2.4 Steps in MTGS. 2.5 Discussion of Medical Diagnosis Data: Use of MTGS and MTS Methods. 2.6 Conclusions. 3 Advantages and Limitations of MTS and MTGS. 3.1 Direction of Abnormalities. 3.1.1 The Gram-Schmidt Process. 3.1.2 Identification of the Direction of Abnormals. 3.1.3 Decision Rule for Higher Dimensions. 3.2 Example of a Graduate Admission System. 3.3 Multicollinearity. 3.4 A Discussion of Partial Correlations. 3.5 Conclusions. 4 Role of Orthogonal Arrays and Signal-to-Noise Ratios in Multivariate Diagnosis. 4.1 Role of Orthogonal Arrays. 4.2 Role of S/N Ratios. 4.3 Advantages of S/N Ratios. 4.3.1 S/N Ratio as a Simple Measure to Identify Useful Variables. 4.3.2 S/N Ratio as a Measure of Functionality of the System. 4.3.3 S/N Ratio to Predict the Given Conditions. 4.4 Conclusions. 5 Treatment of Categorical Data in MTS/MTGS Methods. 5.1 MTS/MTGS with Categorical Data. 5.2 A Sales and Marketing Application. 5.2.1 Selection of Suitable Variables. 5.2.2 Description of the Variables. 5.2.3 Construction of Mahalanobis Space. 5.2.4 Validation of the Measurement Scale. 5.2.5 Identification of Useful Variables (Developing Stage). 5.2.6 S/N Ratio of the System (Before and After). 5.3 Conclusions. 6 MTS/MTGS under a Noise Environment. 6.1 MTS/MTGS with Noise Factors. 6.1.1 Treat Each Level of the Noise Factor Separately. 6.1.2 Include the Noise Factor as One of the Variables. 6.1.3 Combine Variables of Different Levels of the Noise Factor. 6.1.4 Do Not Consider the Noise Factor If It Cannot Be Measured. 6.2 Conclusions. 7 Determination of Thresholds: A Loss Function Approach. 7.1 Why Threshold Is Required in MTS/MTGS. 7.2 Quadratic Loss Function. 7.2.1 QLF for the Nominal-the-Best Characteristic. 7.2.2 QLF for the Larger-the-Better Characteristic. 7.2.3 QLF for the Smaller-the-Better Characteristic. 7.3 QLF for MTS/MTGS. 7.3.1 Determination of Threshold. 7.3.2 When Only Good Abnormals Are Present. 7.4 Examples. 7.4.1 Medical Diagnosis Case. 7.4.2 A Student Admission System. 7.5 Conclusions. 8 Standard Error of the Measurement Scale. 8.1 Why Mahalanobis Distance Is Used for Constructing the Measurement Scale. 8.2 Standard Error of the Measurement Scale. 8.3 Standard Error for the Medical Diagnosis Example. 8.4 Conclusions. 9 Advanced Topics in Multivariate Diagnosis. 9.1 Multivariate Diagnosis Using the Adjoint Matrix Method. 9.1.1 Related Topics of Matrix Theory. 9.1.2 Adjoint Matrix Method for Handling Multicollinearity. 9.2 Examples for the Adjoint Matrix Method. 9.2.1 Example 1. 9.2.2 Example 2. 9.3 Beta Adjustment Method for Small Correlations. 9.4 Subset Selection Using the Multiple Mahalanobis Distance Method. 9.4.1 Steps in the MMD Method. 9.4.2 Example. 9.5 Selection of Mahalanobis Space from Historical Data. 9.6 Conclusions. 10 MTS/MTGS versus Other Methods. 10.1 Principal Component Analysis. 10.2 Discrimination and Classification Method. 10.2.1 Fisher's Discriminant Function. 10.2.2 Use of Mahalanobis Distance. 10.3 Stepwise Regression. 10.4 Test of Additional Information (Rao's Test). 10.5 Multiple Regression Analysis. 10.6 Multivariate Process Control. 10.7 Artificial Neural Networks. 10.7.1 Feed Forward (Backpropagation) Method. 10.7.2 Theoretical Comparison. 10.7.3 Medical Diagnosis Data Analysis. 10.8 Conclusions. 11 Case Studies. 11.1 American Case Studies. 11.1.1 Auto Marketing Case Study. 11.1.2 Gear Motor Assembly Case Study. 11.1.3 ASQ Research Fellowship Grant Case Study. 11.1.4 Improving the Transmission Inspection System Using MTS. 11.2 Japanese Case Studies. 11.2.1 Improvement of the Utility Rate of Nitrogen While Brewing Soy Sauce. 11.2.2 Application of MTS for Measuring Oil in Water Emulsion. 11.2.3 Prediction of Fasting Plasma Glucose (FPG) from Repetitive Annual Health Checkup Data. 11.3 Conclusions. 12 Concluding Remarks. 12.1 Important Points of the Proposed Methods. 12.2 Scientific Contributions from MTS/MTGS Methods. 12.3 Limitations of the Proposed Methods. 12.4 Recommendations for Future Research. Bibliography. Appendixes. A.1 ASI Data Set. A.2 Principal Component Analysis (MINITAB Output). A.3 Discriminant and Classification Analysis (MINITAB Output). A.4 Results of Stepwise Regression (MINITAB Output). A.5 Multiple Regression Analysis (MINITAB Output). A.6 Neural Network Analysis (MATLAB Output). A.7 Variables for Auto Marketing Case Study. Index.

Journal ArticleDOI
TL;DR: This work analytically constructs the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and finds the principal eigenmodes and eigenvalues with respect to lighting variability, and extends these results to the single-viewpoint case.
Abstract: We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. We analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available and the spherical harmonics are no longer orthonormal over the restricted domain. Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations.
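The key analytic ingredient in this line of work (Basri-Jacobs and Ramamoorthi-Hanrahan) is the spherical-harmonic expansion of the irradiance; a sketch of the standard formula, under the paper's assumptions of a convex Lambertian surface and distant lighting:

```latex
% Irradiance at surface normal n, for distant illumination with
% spherical-harmonic coefficients L_{lm}; \hat{A}_l are the coefficients of
% the Lambertian half-cosine kernel and decay rapidly with l.
E(\mathbf{n}) \;=\; \sum_{l=0}^{\infty}\sum_{m=-l}^{l} \hat{A}_l \, L_{lm} \, Y_{lm}(\mathbf{n}),
\qquad \hat{A}_l \approx 0 \quad \text{for } l > 2 .
```

Because almost all of the kernel energy lies in l <= 2, images of a convex Lambertian object lie near a nine-dimensional subspace spanned by the Y_lm; the paper analyzes how this construction changes when only the upper hemisphere of normals is visible and the harmonics lose their orthonormality.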

Journal ArticleDOI
TL;DR: A faster two-step algorithm that is more stable numerically is constructed and illustrated on a data set with four dimensions and on two chemometrical data sets with 1200 and 600 dimensions.

Journal ArticleDOI
TL;DR: The authors discuss the distinct purposes of PCA and EFA, using two data sets as examples to highlight the differences in results between these procedures, and also review the use of each technique in three major communication journals: Communication Monographs, Human Communication Research, and Communication Research.
Abstract: Exploratory factor analysis is a popular statistical technique used in communication research. Although exploratory factor analysis (EFA) and principal components analysis (PCA) are different techniques, PCA is often employed incorrectly to reveal latent constructs (i.e., factors) of observed variables, which is the purpose of EFA. PCA is more appropriate for reducing measured variables into a smaller set of variables (i.e., components) while retaining as much of the total variance in the measured variables as possible. Furthermore, the popular use of varimax rotation raises some concerns about the relationships among the factors that researchers claim to discover. This paper discusses the distinct purposes of PCA and EFA, using two data sets as examples to highlight the differences in results between these procedures, and also reviews the use of each technique in three major communication journals: Communication Monographs, Human Communication Research, and Communication Research.

Journal ArticleDOI
TL;DR: In this paper, the problem of using future multivariate observations with missing data to estimate latent variable scores from an existing principal component analysis (PCA) model is addressed, and several methods for estimating the scores of new individuals with missing observations are presented.
Abstract: This paper addresses the problem of using future multivariate observations with missing data to estimate latent variable scores from an existing principal component analysis (PCA) model. This is a critical issue in multivariate statistical process control (MSPC) schemes where the process is continuously interrogated based on an underlying PCA model. We present several methods for estimating the scores of new individuals with missing data: a so-called trimmed score method (TRI), a single-component projection method (SCP), a method of projection to the model plane (PMP), a method based on the iterative imputation of missing data, a method based on the minimization of the squared prediction error (SPE), a conditional mean replacement method (CMR) and various least squares-based methods: one based on a regression on known data (KDR) and the other based on a regression on trimmed scores (TSR). The basis for each method and the expressions for the score estimators, their covariance matrices and the estimation errors are developed. Some of the methods discussed have already been proposed in the literature (SCP, PMP and CMR), some are original (TRI and TSR) and others are shown to be equivalent to methods already developed by other authors: iterative imputation and SPE methods are equivalent to PMP; KDR is equivalent to CMR. These methods can be seen as different ways to impute values for the missing variables. The efficiency of the methods is studied through simulations based on an industrial data set. The KDR method is shown to be statistically superior to the other methods, except the TSR method in which the matrix to be inverted is of a much smaller size.
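For illustration, a minimal sketch of one of the listed methods, projection to the model plane (PMP): the scores are estimated by regressing the observed variables on the corresponding rows of the loading matrix. The toy model below is synthetic, not the industrial data set.

```python
import numpy as np

def pmp_scores(x_new, P, observed):
    """x_new: length-p observation; P: (p, a) PCA loadings;
    observed: boolean mask marking the measured variables."""
    P_obs = P[observed, :]                       # rows for the known variables
    tau, *_ = np.linalg.lstsq(P_obs, x_new[observed], rcond=None)
    return tau                                   # estimated latent scores

# toy model: 6 variables, 2 latent components, 2 variables missing
rng = np.random.default_rng(0)
P, _ = np.linalg.qr(rng.normal(size=(6, 2)))     # orthonormal loadings
tau_true = rng.normal(size=2)
x = P @ tau_true
mask = np.array([True, True, False, True, False, True])
print(pmp_scores(x, P, mask), tau_true)          # close when the model fits
```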

Journal ArticleDOI
TL;DR: A new subspace identification algorithm is proposed that gives consistent model estimates under the errors-in-variables (EIV) situation and is demonstrated using a simulated process and a real industrial process for model identification and order determination.

Journal ArticleDOI
TL;DR: In this article, a novel principal component analysis (PCA) technique directly based on original image matrices is developed for image feature extraction; it proves more powerful and efficient than conventional PCA and FLD.

01 Jan 2002
TL;DR: Experimental results on the ORL face database show that the proposed IMPCA is more powerful and efficient than conventional PCA and FLD.
Abstract: Conventional principal component analysis (PCA) and Fisher linear discriminant analysis (FLD) are both based on vectors. In this paper, by contrast, a novel PCA technique directly based on original image matrices is developed for image feature extraction. Experimental results on the ORL face database show that the proposed IMPCA is more powerful and efficient than conventional PCA and FLD.
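A minimal sketch of the image-matrix idea (IMPCA, later popularized as 2DPCA): build a small image covariance matrix directly from the 2-D images and project each image onto its leading eigenvectors, with no vectorization step. The image set below is random placeholder data.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((40, 28, 23))                    # 40 images of 28 x 23 pixels
A_mean = A.mean(axis=0)

# image covariance matrix G (23 x 23), averaged over the image set
G = np.mean([(a - A_mean).T @ (a - A_mean) for a in A], axis=0)
eigvals, eigvecs = np.linalg.eigh(G)
W = eigvecs[:, ::-1][:, :5]                     # top 5 projection axes

features = A @ W                                # (40, 28, 5) feature matrices
```

The eigenproblem here is only 23 x 23, versus 644 x 644 for vectorized PCA on the same images, which is where the efficiency gain comes from.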

Journal ArticleDOI
TL;DR: In this paper, the combination of GC×GC with ANOVA-based feature selection was found to be a useful tool to enhance the chemical selectivity, and thus the classification power of the analytical procedure, when coupled with PCA.


Proceedings ArticleDOI
11 Aug 2002
TL;DR: This paper proposes a method that allows for simultaneous learning and recognition, showing that one can keep only the coefficients of the learned images, discard the actual images, and still build a model of appearance that is fast to compute and open-ended.
Abstract: The methods for visual learning that compute a space of eigenvectors by Principal Component Analysis (PCA) traditionally require a batch computation step. Since this leads to potential problems when dealing with large sets of images, several incremental methods for the computation of the eigenvectors have been introduced. However, such learning cannot be considered an on-line process, since all the images are retained until the final step of computation of the space of eigenvectors, when their coefficients in this subspace are computed. In this paper we propose a method that allows for simultaneous learning and recognition. We show that we can keep only the coefficients of the learned images and discard the actual images, and still be able to build a model of appearance that is fast to compute and open-ended. We performed extensive experimental testing, which showed that the recognition rate and reconstruction accuracy are comparable to those obtained by the batch method.
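A hedged sketch of the workflow with scikit-learn's IncrementalPCA. The paper's own update rule differs (in particular, it also updates previously stored coefficients as the subspace evolves); this only shows the spirit of consuming batches, keeping coefficients, and discarding the raw images.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=15)

stored_coefficients = []
for _ in range(10):                        # a stream of image batches
    batch = rng.random((20, 64 * 64))      # 20 vectorized images per batch
    ipca.partial_fit(batch)                # update the eigenspace
    stored_coefficients.append(ipca.transform(batch))  # keep coefficients
    del batch                              # the raw images are discarded
```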

Book ChapterDOI
TL;DR: Non-negative Matrix Factorization (NMF) technique is introduced in the context of face classification and a direct comparison with Principal Component Analysis (PCA) is also analyzed.
Abstract: The computer vision problem of face classification under several ambient and unfavorable conditions is considered in this study. Changes in expression, different lighting conditions and occlusions are the relevant factors that are studied in this contribution. The Non-negative Matrix Factorization (NMF) technique is introduced in the context of face classification, and a direct comparison with Principal Component Analysis (PCA) is also analyzed. Two leading techniques in face recognition are also considered, noting that NMF is able to improve these techniques when a high-dimensional feature space is used. Finally, different distance metrics (L1, L2 and correlation) are evaluated in the feature space defined by NMF in order to determine the best one for this specific problem. Experiments demonstrate that correlation is the most suitable metric for this problem.
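A minimal sketch of NMF encodings plus nearest-neighbour matching under the correlation distance, which the study found most suitable; the data matrices and component count below are placeholders.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X_train = rng.random((100, 32 * 32))       # non-negative face data (stand-in)
X_test = rng.random((10, 32 * 32))

nmf = NMF(n_components=25, init="nndsvda", max_iter=500, random_state=0)
H_train = nmf.fit_transform(X_train)       # encodings of the gallery faces
H_test = nmf.transform(X_test)             # encodings of the probe faces

def correlation_distance(u, v):
    u = u - u.mean()
    v = v - v.mean()
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

matches = [np.argmin([correlation_distance(h, g) for g in H_train])
           for h in H_test]                # nearest gallery face per probe
```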

Proceedings ArticleDOI
03 Nov 2002
TL;DR: In this article, a combination of principal component analysis (PCA) and multiscale pyramid decomposition is used to estimate the local orientation of an image.
Abstract: This paper presents an image local orientation estimation method based on a combination of two well-known techniques: principal component analysis (PCA) and multiscale pyramid decomposition. PCA is applied to find the maximum likelihood (ML) estimate of the local orientation. The proposed technique is shown to enjoy excellent robustness against noise. We present both simulated and real image examples to demonstrate it.
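A single-scale sketch of the PCA step: the dominant eigenvector of the scatter matrix of local gradient vectors gives the orientation estimate. The multiscale pyramid refinement from the paper is omitted, and the test patch is synthetic.

```python
import numpy as np

def local_orientation(patch):
    """Dominant orientation of a patch from the PCA of its gradient vectors."""
    gy, gx = np.gradient(patch.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])   # gradient samples
    C = G.T @ G                                     # 2 x 2 scatter matrix
    eigvals, eigvecs = np.linalg.eigh(C)
    gx_dom, gy_dom = eigvecs[:, -1]                 # dominant gradient axis
    return np.degrees(np.arctan2(gy_dom, gx_dom)) % 180

# stripes running at 135 degrees, so the gradient points at 45 degrees
x, y = np.meshgrid(np.arange(32), np.arange(32))
patch = np.sin((x + y) / 3.0)
print(local_orientation(patch))   # dominant gradient direction, about 45
```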

Journal ArticleDOI
TL;DR: It is clear that the use of principal components can noticeably improve the strength of DEA models.
Abstract: This research further develops the combined use of principal component analysis (PCA) and data envelopment analysis (DEA). The aim is to reduce the curse of dimensionality that occurs in DEA when there is an excessive number of inputs and outputs in relation to the number of decision-making units. Three separate PCA–DEA formulations are developed in the paper utilising the results of PCA to develop objective, assurance region type constraints on the DEA weights. The first model applies PCA to grouped data representing similar themes, such as quality or environmental measures. The second model, if needed, applies PCA to all inputs and separately to all outputs, thus further strengthening the discrimination power of DEA. The third formulation searches for a single set of global weights with which to fully rank all observations. In summary, it is clear that the use of principal components can noticeably improve the strength of DEA models.

Proceedings Article
01 Jan 2002
TL;DR: Testing on the FERET data set and using standard partitions, it is found that, when a proper distance metric is used, PCA significantly outperforms ICA on a human face recognition task, contrary to previously published results.
Abstract: Over the last ten years, face recognition has become a specialized applications area within the field of computer vision. Sophisticated commercial systems have been developed that achieve high recognition rates. Although elaborate, many of these systems include a subspace projection step and a nearest neighbor classifier. The goal of this paper is to rigorously compare two subspace projection techniques within the context of a baseline system on the face recognition task. The first technique is principal component analysis (PCA), a well-known “baseline” for projection techniques. The second technique is independent component analysis (ICA), a newer method that produces spatially localized and statistically independent basis vectors. Testing on the FERET data set (and using standard partitions), we find that, when a proper distance metric is used, PCA significantly outperforms ICA on a human face recognition task. This is contrary to previously published results.

Journal ArticleDOI
TL;DR: In this article, principal component analysis (PCA) coupled with target transformation was used to model S K-XANES spectra of humic acid samples, and compared the results with least-squares LCF.
Abstract: Quantitative application of x-ray absorption near edge structure (XANES) spectroscopy to soils and other geochemical systems requires a determination of the proportions of multiple chemical species that contribute to the measured spectrum. Two common approaches to fitting XANES spectra are spectral deconvolution and least-squares linear combination fitting (LCF). The objective of this research was to evaluate principal component analysis (PCA) coupled with target transformation to model S K-XANES spectra of humic acid samples, and to compare the results with least-squares LCF. Principal component analysis provided a statistical basis for choosing the number of standard species to include in the fitting model. Target transformation identified which standards were statistically more likely to explain the spectra of the humic acid samples. The selected standards and the scaling coefficients obtained by the PCA approach deviated by ≤6 mol% from results obtained by performing LCF using a large number of binary, ternary, and quaternary combinations of seven S standards. Because no energy shift is allowed in the PCA approach, fitting may be refined, when appropriate, by using afterwards a least-squares method that includes energy offset parameters. Statistical ranking of the most likely standard spectra contributing to the unknown spectra enhanced LCF by reducing the analysis to a smaller set of standard spectra. The PCA approach is a valuable complement to other spectral fitting techniques as it provides statistical criteria that improve insight to the data, and lead to a more objective approach to fitting.
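A minimal sketch of the target-transformation test on synthetic spectra: fit PCA on the sample set, then check how well a candidate standard spectrum is reproduced from the retained components; a small residual marks the standard as a statistically plausible contributor.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
samples = rng.random((12, 300))            # 12 sample spectra, 300 energies
standard = rng.random(300)                 # candidate standard spectrum

pca = PCA(n_components=4).fit(samples)     # rank chosen from the eigenvalues

def target_residual(spectrum, pca):
    coeffs = pca.transform(spectrum[None, :])       # project the target
    recon = pca.inverse_transform(coeffs)[0]        # rebuild from components
    return np.linalg.norm(spectrum - recon) / np.linalg.norm(spectrum)

print(target_residual(standard, pca))      # small value => plausible standard
```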

Journal ArticleDOI
TL;DR: Spectral PCA, that is, principal component analysis of the power spectra of data from chemical processes, offered an improvement over PCA of the time-domain signals, even when time shifting was used to align the phases.
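A minimal sketch of the idea with synthetic records and an assumed Welch estimator: because the power spectral density discards phase, PCA applied to (log) power spectra is insensitive to time shifts between runs.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
signals = rng.normal(size=(30, 2048))            # 30 process records

# power spectrum of each record; phase (and hence time shift) drops out
freqs, psd = welch(signals, fs=1.0, nperseg=256, axis=-1)
scores = PCA(n_components=3).fit_transform(np.log(psd))  # PCA on log-spectra
```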

Journal ArticleDOI
TL;DR: A novel and computationally efficient method for exploratory analysis of functional MRI data is presented that reveals underlying components of the fMRI data with maximum autocorrelation.