
Showing papers on "Dimensionality reduction" published in 1986


Book
01 May 1986
TL;DR: This book gives a comprehensive treatment of principal component analysis (PCA), from the properties and interpretation of population and sample principal components, through their use in regression and other multivariate techniques, to extensions for time series and other non-independent data and generalizations and adaptations of the method.
Abstract (table of contents):
* Introduction
* Properties of Population Principal Components
* Properties of Sample Principal Components
* Interpreting Principal Components: Examples
* Graphical Representation of Data Using Principal Components
* Choosing a Subset of Principal Components or Variables
* Principal Component Analysis and Factor Analysis
* Principal Components in Regression Analysis
* Principal Components Used with Other Multivariate Techniques
* Outlier Detection, Influential Observations and Robust Estimation
* Rotation and Interpretation of Principal Components
* Principal Component Analysis for Time Series and Other Non-Independent Data
* Principal Component Analysis for Special Types of Data
* Generalizations and Adaptations of Principal Component Analysis
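Since the book's central operation is projecting centered data onto the leading principal axes, a minimal sketch may help. The synthetic data, the names X and k, and the SVD-based route are illustrative assumptions, not taken from the book.

```python
# Minimal PCA sketch via SVD; data and dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # 100 observations, 10 variables

k = 2                            # number of principal components to keep
Xc = X - X.mean(axis=0)          # center each variable

# SVD of the centered data: the rows of Vt are the principal axes,
# ordered by the variance they explain.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = Xc @ Vt[:k].T           # project data onto the first k components
explained = s**2 / np.sum(s**2)  # proportion of variance per component
print(scores.shape, explained[:k])
```

Computing the axes from the SVD of the centered data matrix, rather than from an explicit covariance eigendecomposition, is numerically better conditioned and yields the same components.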

17,446 citations


Proceedings Article
04 Apr 1986
TL;DR: This paper presents a new methodology based on the Fisher linear discriminant method but adapted to the underdetermined case, in which only relatively small amounts of training data are available for each cluster of objects.
Abstract: An important technique for object recognition in electro-optics, signal processing, and image understanding is to use a training algorithm to build a database against which data for test objects are compared. The data for each training and test object are represented as a vector in a space of possibly high dimension, perhaps in the hundreds or thousands. It is usually desirable to project these data onto a space of much lower dimension in a way that preserves the separation of object clusters. The difficulty with this approach as it is usually presented is that it leads to inordinately large generalized matrix eigensystems that must be analyzed. Equally limiting is the amount of data required: Fisher's linear discriminant method, for instance, usually requires at least as many training vectors as the dimension of the representation space. This is a severe restriction, since it would be preferable to train on modest amounts of data, say 20 data vectors per object class. In this paper, we present a new methodology based on the Fisher linear discriminant method but designed for the underdetermined case, that is, for the case of having only relatively small amounts of training data for each cluster of objects. The new algorithm draws partly on the original Fisher algorithm and partly on more recent fast algorithms for matrix factorizations. We also present examples applying the algorithm to automatic target recognition using images from FLIR data.
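To make the underdetermined setting concrete, the sketch below runs classical two-class Fisher LDA with far fewer training vectors than dimensions, using a Moore-Penrose pseudoinverse in place of the inverse of the (here singular) within-class scatter matrix. This is a common workaround, not the factorization-based algorithm the paper proposes, and all names and dimensions are illustrative assumptions.

```python
# Two-class Fisher LDA sketch for the underdetermined case (n << d).
# The pseudoinverse workaround is an assumption, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
d, n = 200, 20                          # high dimension, few samples per class
A = rng.normal(size=(n, d))             # class A training vectors
B = rng.normal(size=(n, d)) + 0.5       # class B training vectors, shifted mean

mA, mB = A.mean(axis=0), B.mean(axis=0)
Sw = (A - mA).T @ (A - mA) + (B - mB).T @ (B - mB)  # within-class scatter

# With only 2n = 40 samples in 200 dimensions, Sw is singular, so the
# classical Fisher direction Sw^{-1}(mA - mB) is undefined; the
# pseudoinverse gives a minimum-norm substitute.
w = np.linalg.pinv(Sw) @ (mA - mB)      # Fisher discriminant direction
w /= np.linalg.norm(w)

# Projecting onto w reduces each vector to a single score; the class
# means should separate along this axis.
print((A @ w).mean(), (B @ w).mean())
```

The projection step is the dimensionality reduction the abstract describes: each high-dimensional object vector collapses to its coordinate along the discriminant direction, so cluster separation can be assessed in one dimension instead of hundreds.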

4 citations