Showing papers on "Dimensionality reduction published in 1991"


Journal ArticleDOI
TL;DR: The NLPCA method is demonstrated on time-dependent, simulated batch reaction data; the results show that it successfully reduces dimensionality and produces a feature-space map resembling the actual distribution of the underlying system parameters.
Abstract: Nonlinear principal component analysis is a novel technique for multivariate data analysis, similar to the well-known method of principal component analysis. NLPCA, like PCA, is used to identify and remove correlations among problem variables as an aid to dimensionality reduction, visualization, and exploratory data analysis. While PCA identifies only linear correlations between variables, NLPCA uncovers both linear and nonlinear correlations, without restriction on the character of the nonlinearities present in the data. NLPCA operates by training a feedforward neural network to perform the identity mapping, where the network inputs are reproduced at the output layer. The network contains an internal “bottleneck” layer (containing fewer nodes than input or output layers), which forces the network to develop a compact representation of the input data, and two additional hidden layers. The NLPCA method is demonstrated using time-dependent, simulated batch reaction data. Results show that NLPCA successfully reduces dimensionality and produces a feature space map resembling the actual distribution of the underlying system parameters.
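The bottleneck architecture described above can be sketched as a small NumPy autoencoder: a tanh mapping layer, a linear bottleneck, a tanh demapping layer, and a linear output layer trained to reproduce its input. The layer sizes, learning rate, and the toy curve data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_map, n_code):
    # Layer sizes: input -> mapping (tanh) -> bottleneck (linear)
    #                     -> demapping (tanh) -> output (linear)
    sizes = [n_in, n_map, n_code, n_map, n_in]
    return [rng.normal(0.0, 0.1, (a + 1, b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(W, X):
    acts = [X]
    for i, Wi in enumerate(W):
        z = np.hstack([acts[-1], np.ones((len(X), 1))]) @ Wi   # bias column appended
        acts.append(np.tanh(z) if i % 2 == 0 else z)           # tanh on mapping/demapping layers
    return acts

def train(X, n_code=1, n_map=8, lr=0.02, epochs=5000):
    W = init(X.shape[1], n_map, n_code)
    for _ in range(epochs):
        acts = forward(W, X)
        delta = acts[-1] - X                                   # identity-mapping error at the output
        for i in range(len(W) - 1, -1, -1):
            if i % 2 == 0:                                     # back through a tanh layer
                delta = delta * (1.0 - acts[i + 1] ** 2)
            A = np.hstack([acts[i], np.ones((len(X), 1))])
            grad = A.T @ delta / len(X)
            delta = delta @ W[i][:-1].T                        # propagate past the weights
            W[i] -= lr * grad
    return W

# Toy usage: data lying near a one-dimensional curve in 3-D, compressed to one score.
t = rng.uniform(-1.0, 1.0, (200, 1))
X = np.hstack([t, t ** 2, np.sin(np.pi * t)]) + 0.05 * rng.normal(size=(200, 3))
W = train(X)
scores = forward(W, X)[2]    # bottleneck activations: the nonlinear principal component
print(scores.shape)          # (200, 1)
```

The bottleneck activations play the role of the nonlinear principal components; adding bottleneck nodes extracts more components at once.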

2,643 citations


Journal ArticleDOI
TL;DR: This paper proves that the singular value (SV) feature vector has important properties of algebraic and geometric invariance as well as insensitivity to noise, properties that are very useful for the description and recognition of images.
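As a hedged illustration of the SV feature idea, the singular values of an image matrix can be used directly as a compact descriptor; the random "image" below is only a placeholder.

```python
import numpy as np

image = np.random.default_rng(12).random((32, 32))      # placeholder image matrix
sv_features = np.linalg.svd(image, compute_uv=False)    # singular values as the feature vector
print(sv_features[:5])                                   # a few leading singular values
```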

314 citations


Journal ArticleDOI
01 Mar 1991
TL;DR: Algorithms for dimensionality reduction and feature extraction, and their application as effective pattern recognizers for identifying computer users, are presented; applying these algorithms could lead to better results in securing access to computer systems.
Abstract: Algorithms for dimensionality reduction and feature extraction, and their application as effective pattern recognizers for identifying computer users, are presented. Fisher's linear discriminant technique was used to reduce the dimensionality of the patterns. An approach for the extraction of physical features from pattern vectors is developed. This approach relies on shuffling two pattern vectors, and it is competitive with Fisher's technique in terms of speed and results. An online identification system was developed. The system was tested over a period of five weeks by ten participants, and in 1.17% of cases it was unable to reach a decision. The application of these algorithms to identifying computer users could lead to better results in securing access to computer systems: the user types a password, and the system captures not only the word but also the time between each keystroke and the next.
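A hedged sketch of the dimensionality-reduction step: Fisher's linear discriminant applied to keystroke-interval vectors. The synthetic timing data, user count, and interval count are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_users, samples_per_user, n_intervals = 10, 30, 7     # e.g. an 8-character password

# Fake per-user keystroke-interval profiles (milliseconds) plus within-user variation.
profiles = rng.uniform(80, 300, size=(n_users, n_intervals))
X = np.vstack([p + rng.normal(0, 15, (samples_per_user, n_intervals)) for p in profiles])
y = np.repeat(np.arange(n_users), samples_per_user)

lda = LinearDiscriminantAnalysis(n_components=3)       # project onto 3 discriminant axes
Z = lda.fit_transform(X, y)                            # reduced-dimensionality patterns
print(Z.shape, lda.score(X, y))                        # (300, 3) and training accuracy
```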

91 citations


Proceedings ArticleDOI
04 Nov 1991
TL;DR: A transformation matrix is derived that makes it possible to attain the full-dimension Cramer-Rao bound in the reduced space as well, and a method is devised for designing this transformation matrix.
Abstract: The computational complexity of direction-of-arrival estimation using sensor arrays increases very rapidly with the number of sensors in the array. One way to lower the amount of computation is to employ some kind of reduction of the data dimension. This is usually accomplished by employing linear transformations that map full-dimension data into a lower-dimensional space. In the present work, a transformation matrix is derived that makes it possible to attain the full-dimension Cramer-Rao bound in the reduced space as well. A bound on the dimension of the reduced data set is given, above which it is always possible to obtain the same accuracy for the lower-dimension estimates of the source locations as that achievable using the full-dimension data. Furthermore, a method is devised for designing the transformation matrix.
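A minimal sketch of the general idea: map full-dimension array snapshots into a lower-dimensional space with a linear transformation before estimation. Here the transformation is simply a set of orthonormalized steering vectors spanning a sector of interest; this is an illustrative choice, not the design method of the paper, and the array geometry and sizes are assumptions.

```python
import numpy as np

m, d = 16, 5                                   # sensors, reduced dimension
angles = np.deg2rad(np.linspace(-20, 20, d))   # sector the transformation should cover

def steering(theta, m):
    # Uniform linear array with half-wavelength spacing.
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

A = np.column_stack([steering(t, m) for t in angles])
T, _ = np.linalg.qr(A)                         # orthonormal columns, shape (m, d)

rng = np.random.default_rng(2)
snapshots = (rng.normal(size=(m, 200)) + 1j * rng.normal(size=(m, 200))) / np.sqrt(2)
reduced = T.conj().T @ snapshots               # d x 200: data for reduced-space estimation
R_reduced = reduced @ reduced.conj().T / 200   # reduced-space sample covariance
print(R_reduced.shape)
```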

32 citations


Book ChapterDOI
01 Jan 1991
TL;DR: In this article, an unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented; a statistical framework for the associated parameter estimation problem is given, and its connection to exploratory projection pursuit methods is established.
Abstract: A novel unsupervised neural network for dimensionality reduction which seeks directions emphasizing distinguishing features in the data is presented. A statistical framework for the parameter estimation problem associated with this neural network is given and its connection to exploratory projection pursuit methods is established. The network is shown to minimize a loss function (projection index) over a set of parameters, yielding an optimal decision rule under some norm. A specific projection index that favors directions possessing multimodality is presented. This leads to a form similar to the synaptic modification equations governing learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982). The importance of a dimensionality reduction principle based solely on distinguishing features is demonstrated using a linguistically motivated phoneme recognition experiment, and compared with feature extraction using principal components and a back-propagation network.
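A rough sketch, under my own illustrative assumptions, of a BCM-style synaptic update, the rule whose form the multimodality-seeking projection index is shown to resemble. The bimodal input (two clusters separated along one axis) gives the neuron a "distinguishing" direction to become selective to; learning rate, threshold dynamics, and data are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
centers = np.zeros(10)
centers[0] = 3.0
signs = np.where(rng.random(5000)[:, None] < 0.5, 1.0, -1.0)
X = rng.normal(size=(5000, 10)) + signs * centers       # mixture of two Gaussian clusters

w = rng.normal(scale=0.1, size=10)                      # synaptic weights of one neuron
theta, lr, tau = 0.1, 1e-3, 0.01                        # sliding threshold, learning rate, averaging rate

for x in X:
    c = w @ x                                           # postsynaptic activity
    w += lr * c * (c - theta) * x                       # BCM modification: phi(c, theta) * input
    theta += tau * (c ** 2 - theta)                     # threshold tracks a running mean of c^2

print(np.round(w, 2))                                   # the weight on the separating axis should dominate
```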

30 citations


Journal ArticleDOI
TL;DR: A new feature extraction method based on the modified “plus e, take away f” algorithm is proposed, and it is shown from experimental results that the proposed method is superior to the ODV method in terms of the error probability.

22 citations


Proceedings ArticleDOI
23 Jul 1991
TL;DR: A group of multi-layer perceptron type NNs is trained to classify the security status of the power system for specific contingencies based on the pre-contingency system variables, offering the potential to become a viable alternative to existing computationally intensive schemes for SSA.
Abstract: A neural network (NN) for static security assessment (SSA) of a large scale power system is proposed. A group of multi-layer perceptron type NNs is trained to classify the security status of the power system for specific contingencies based on the pre-contingency system variables. The curse of dimensionality of the input data is reduced by partitioning the problem into smaller sub-problems. Better class separation and further dimensionality reduction are obtained by a feature selection scheme based on the Karhunen-Loève expansion. When each trained NN is queried on-line, it can provide the power system operator with the security status of the current operating point for a specified contingency. The parallel network architecture and the adaptive capability of the NNs are combined to achieve high speeds of execution and good classification accuracy. With the expected emergence of affordable NN hardware, this technique has the potential to become a viable alternative to existing computationally intensive schemes for SSA.
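An illustrative sketch of the reduction step described above: a Karhunen-Loève (principal component) expansion of the pre-contingency variables feeding a multi-layer perceptron classifier for one contingency. The data, dimensions, and labels are invented for the example, not taken from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 120))                 # pre-contingency system variables
y = (X[:, :3].sum(axis=1) > 0).astype(int)       # fake secure / insecure labels

model = make_pipeline(
    PCA(n_components=15),                        # K-L expansion keeps the leading modes
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))                         # training accuracy of the sketch model
```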

22 citations


Journal ArticleDOI
01 Sep 1991
TL;DR: Two versions of an algorithm that combines preprocessing/prefiltering and the coherent signal subspace method in the frequency domain are developed, and comparative performance evaluations are carried out.
Abstract: The determination of the number of wideband signals impinging on a passive sensor array in the form of multiple groups (clusters), and the estimation of their directions of arrival, are addressed. The idea of preprocessing by a set of orthogonal beamformers for dimensionality reduction is used to simplify wideband processing. The effectiveness of this approach is demonstrated with the use of discrete prolate spheroidal sequences (DPSSs) as the beamformers for linear uniform arrays. A weighting gain as an indicator of the improvement in the signal-to-noise ratio due to such preprocessing is introduced. Two versions of an algorithm that combines preprocessing/prefiltering and the coherent signal subspace method in the frequency domain are developed, and comparative performance evaluations are carried out. The relative effects of beamspace prefiltering on optimum and suboptimum estimators are explored.
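A sketch of the preprocessing idea: discrete prolate spheroidal sequences used as orthogonal beamformers that map element-space snapshots into a smaller beamspace. The array size, the time-bandwidth parameter NW, and the snapshot count are illustrative assumptions.

```python
import numpy as np
from scipy.signal.windows import dpss

m, k = 32, 6                                     # sensors in the linear uniform array, beams kept
B = dpss(m, NW=3, Kmax=k)                        # k orthogonal DPSS tapers, shape (k, m)

rng = np.random.default_rng(5)
snapshots = rng.normal(size=(m, 100)) + 1j * rng.normal(size=(m, 100))
beamspace = B @ snapshots                        # k x 100 reduced-dimension data
R_bs = beamspace @ beamspace.conj().T / 100      # beamspace covariance for subspace methods
print(R_bs.shape)
```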

12 citations


Proceedings ArticleDOI
01 Aug 1991
TL;DR: A block structured neural feature extraction system is proposed whose distortion tolerance is built up gradually by successive blocks in a pipeline architecture that consists of only feedforward neural networks, allowing efficient parallel implementation.
Abstract: A block structured neural feature extraction system is proposed whose distortion tolerance is built up gradually by successive blocks in a pipeline architecture. The system consists of only feedforward neural networks, allowing efficient parallel implementation. The feature extraction is based on distortion tolerant Gabor transformation and minimum distortion clustering by hierarchical self-organizing feature maps (SOFM). Due to the unsupervised learning strategy there is no need for preclassified training samples or other explicit selection of training patterns during the training. A subspace classifier implementation on top of the feature extractor is demonstrated. The current experiments indicate that the feature space has sufficient resolution power for a small number of classes with rather strong distortions. The amount of supervised training required is very small, due to many unsupervised stages refining the data to be suitable for classification.

1 Introduction. Major stages in a pattern recognition task are feature extraction and classification [7]. The feature extraction procedure transforms the preprocessed image into a representation that is more suitable for distinguishing different objects. A feature can be a complex description, e.g. a graph describing connected line segments and textured areas, or it can be simply a vector of real numbers, each component measuring some relevant quantity from the image. The recent emergence of very powerful and general neural network classifiers, like learning vector quantization (LVQ) [10] and the multilayer perceptron (MLP) with back-propagation learning [17], has greatly simplified the construction of classifiers in pattern recognition systems. As non-parametric methods the neural classifiers require no model for class probability distributions, because they adaptively find class boundaries that correctly classify the training samples. But regardless of the alluring omnipotence of the neural classifiers, the need for efficient feature extraction still exists, since images of real world objects are often subject to various sources of distortions, like translation, scaling, rotation, perspective distortions, variations in lighting conditions, variations in background, partial occlusions by other objects, and natural variations inside the classes, like a human face with different expressions. All these distortions together make the object's class boundaries extremely complex in the gray scale space. To train a reasonably distortion tolerant classifier to work on pure gray scale data, the training samples are required to span the whole space of objects with all possible distortions and their combinations. Clearly collecting and …
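A sketch of the first block of such a pipeline: a small Gabor filter bank applied to an image, producing responses that would then be clustered by self-organizing feature maps. Kernel sizes, frequencies, and the pooling into a single vector are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, size=15, sigma=3.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(2j * np.pi * freq * xr)       # complex Gabor kernel

rng = np.random.default_rng(6)
image = rng.random((64, 64))                                # stand-in for a preprocessed image

bank = [gabor_kernel(f, t) for f in (0.1, 0.2)
                            for t in np.linspace(0, np.pi, 4, endpoint=False)]
features = np.array([np.abs(convolve2d(image, k, mode='valid')).mean() for k in bank])
print(features.shape)                                       # 8 distortion-tolerant responses
```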

10 citations


Proceedings ArticleDOI
30 Sep 1991
TL;DR: The authors compared the proposed method on speech recognition using a continuous HMM (hidden Markov model) with the reduction method using the K-L expansion and with feature parameters of regression coefficients in addition to the original static features.
Abstract: To recognize speech with dynamical features, one should use feature parameters that include dynamically changing patterns, that is, time sequential patterns. The K-L expansion has been used to reduce the dimensionality of time sequential patterns. This method changes the axes of the feature parameter space linearly by minimizing the error between the original and reconstructed parameters. In this paper, the dimensionality of dynamical features is reduced by using the nonlinear dimensionality-compressing ability of a neural network. The authors compared the proposed method on speech recognition using a continuous HMM (hidden Markov model) with the reduction method using the K-L expansion and with feature parameters of regression coefficients in addition to the original static features.
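A sketch of the baseline mentioned above: K-L expansion (PCA) applied to time-sequential patterns formed by stacking several consecutive frames of static features. Frame dimensions and window length are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
frames = rng.normal(size=(500, 12))               # 500 frames of 12 static features
window = 5                                        # frames per time-sequential pattern

# Stack consecutive frames: each pattern is a 60-dimensional time-sequential vector.
patterns = np.array([frames[i:i + window].ravel() for i in range(len(frames) - window)])
compressed = PCA(n_components=16).fit_transform(patterns)   # reduced dynamical features
print(patterns.shape, compressed.shape)
```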

2 citations


Book ChapterDOI
Asriel U. Levin1
01 Jan 1991
TL;DR: It is shown that a recurrent system of interconnected linear neurons, using a Hebbian learning rule to update its connections, will converge w.p.1 to an orthogonal projection onto the space generated by the first q principal components of the p-dimensional input covariance matrix.
Abstract: The problem of learning an efficient representation of a high dimensional input by a lower dimensional space is investigated. It is shown that a recurrent system of interconnected linear neurons, using a Hebbian learning rule to update its connections, will converge w.p.1 to an orthogonal projection onto the space generated by the first q principal components of the p-dimensional input covariance matrix. The problem is analytically tractable since the input-output map is expressed by a linear equation.
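The convergence result above belongs to the family of Hebbian principal-subspace learning rules. The sketch below uses Sanger's generalized Hebbian algorithm, a related but not identical network, purely to illustrate the flavor of the result; dimensions, learning rate, and the input covariance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
p, q, lr = 8, 3, 1e-3                               # input dimension, components, learning rate

# Inputs with an anisotropic covariance so the principal directions are well defined.
C = np.diag([5.0, 3.0, 2.0, 1.0, 0.5, 0.3, 0.2, 0.1])
X = rng.normal(size=(20000, p)) @ np.sqrt(C)

W = rng.normal(scale=0.1, size=(q, p))              # q neurons, p inputs
for x in X:
    y = W @ x                                       # neuron outputs
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)   # Hebbian term + decorrelation

# Rows of W should now approximately span the top-q principal subspace (orthonormal rows).
print(np.round(W @ W.T, 2))                         # close to the identity matrix after training
```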

Journal ArticleDOI
TL;DR: In this article, an algorithmic approach that adapts in a selected subregion of the wave-number domain is proposed; it is computationally efficient and allows a simple method to be used to select the subregion for each array focusing point.
Abstract: In the nonstationary ocean acoustic environment, the temporal coherence of the noise field may be substantially shorter than the time required to calculate the weights in frequency-domain adaptive beamformers with large numbers of degrees of freedom. This problem can be alleviated by adapting in a reduced dimensionality space derived from the array element space by any of several possible transformations. An algorithmic approach, which adapts in a selected subregion of the wave-number domain, has been formulated and evaluated. This approach is computationally efficient and allows a simple method to be used to select the subregion for each array focusing point. Resolution performance has been simulated for both near- and far-field sources. Estimation errors in the covariance matrix and phase mismatch between the received wave front and the local replica can both cause loss in signal gain due to the desired signal being partially rejected as noise. The effects of phase mismatch have been investigated and shown to be dependent on signal strength. A crossover in relative array gain performance between nonadaptive and adaptive beamforming is shown to occur at a signal strength which is dependent on the mismatch errors and the noise field geometry. Estimation errors in the covariance matrix are minimized by the dimensionality reduction, which reduces the degrees of freedom and hence the error for a given estimation time. Additional reduction in dimensionality is achieved by implementing a form of Owsley’s Enhanced Minimum Variance (EMV) algorithm [N. Owsley, NUSC TR 8305, 1988] in which adaptation dimensionality is further reduced by considering only the D largest eigenvalues of the reduced-space covariance matrix. [Work sponsored by DARPA.]
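A rough sketch of adapting in a reduced-dimensionality space. The transformation here is a generic orthonormal stand-in, not the wave-number-subregion design of the paper, and all dimensions are illustrative: project element-space snapshots through a transformation, estimate the smaller covariance matrix, and form minimum-variance weights there.

```python
import numpy as np

rng = np.random.default_rng(9)
m, d, n = 64, 8, 50                                 # elements, reduced dimension, snapshots

T, _ = np.linalg.qr(rng.normal(size=(m, d)))        # stand-in for a wave-number-sector basis
x = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
z = T.conj().T @ x                                  # reduced-dimension data

R = z @ z.conj().T / n + 1e-3 * np.eye(d)           # better-conditioned small covariance estimate
steer = T.conj().T @ np.ones(m)                     # replica vector mapped into the reduced space
w = np.linalg.solve(R, steer)
w /= steer.conj() @ w                               # minimum-variance (MVDR-style) normalization
print(w.shape)
```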

Journal ArticleDOI
TL;DR: The nonlinear dimensionality reduction capability of the model is demonstrated by means of a component analysis experiment employing the iris data, a data set which is well known in the field of multivariate analysis.
Abstract: This paper proposes a method for concentrating multivariate data (dimensionality reduction) using a neural network model. Specifically, a pulse-input pattern-output network (PPN), i.e., a multilayer network in which the input is presented as a pulsed input (a single component set to 1 with all others held at zero; the term refers only to this pattern of activation and has no connection with pulsed signals in the time domain), is employed, and the N-dimensional training pattern is presented at the outputs. Using backpropagation learning in a PPN, it is demonstrated mathematically that, subject to a certain set of conditions, the dimension of the sample space can be reduced arbitrarily while preserving optimality. That capability is compared with principal component analysis (K-L expansion), and the nonlinear dimensionality reduction capability of the model is demonstrated by means of a component analysis experiment employing the iris data, a data set which is well known in the field of multivariate analysis.

01 May 1991
TL;DR: The dissertation work develops a reduced dimensionality method to provide model simplicity and site specificity in effort estimation; the method outperforms COCOMO according to Conte's criteria.
Abstract: The dissertation seeks to develop an improved method of Software Development Effort Estimation. Software development effort estimation helps in estimating the person-months for the software project and is useful in: (1) aiding the marketing function for making realistic quotes, (2) ensuring that cost estimates are realistic and reasonable profits are feasible, (3) answering the "make or buy" software decision, and (4) manpower and resource planning for software development. The dissertation addresses the problems of model complexity and portability encountered in current estimation models. The dissertation work develops a reduced dimensionality method to provide model simplicity and site specificity in effort estimation. This method is general and can be ported to various sites to yield models specific to these sites. Factor analysis is used to isolate the dimensions that underlie the independent variable space. These dimensions clarify the relationship among the variables in the independent variable space and assist the choice of a smaller set of site-relevant variables. If these dimensions (by definition, few in number) can be used to generate an effort estimate comparable to the estimate derived from the variables from which these dimensions originate, model simplicity is achieved. As these dimensions are not correlated (or have low correlation), the problem of multicollinearity often present in current models is avoided. Site specificity is achieved by the process of choosing measures for these dimensions. Using a subset of possible options for arriving at these measures, a multivariate regression approach was used to generate models on a sizeable portion of the COCOMO data base. A testing mechanism is used to subject the models obtained from this method to the test criteria recommended in the literature. The accuracy of the estimates obtained from these models was compared to the estimates obtained from the COCOMO basic, intermediate, and detailed models on Conte's criteria. The testing supports the merit of the reduced dimensionality method; the method outperforms COCOMO according to Conte's criteria. Further, discriminant analysis is used to predict the ability of the dimensionality reduction method to estimate a given project within 20% of the actual value in either direction (over or under).
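An illustrative sketch of the reduced-dimensionality idea: factor analysis to find a few underlying dimensions of the cost-driver space, then a regression of effort on the factor scores. The synthetic data stands in for a COCOMO-style project database; the sizes and the effort relation are invented.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n_projects, n_drivers = 60, 15
X = rng.normal(size=(n_projects, n_drivers))         # cost-driver ratings per project
effort = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 0.5, n_projects)   # fake person-months

scores = FactorAnalysis(n_components=3).fit_transform(X)   # a few underlying dimensions
model = LinearRegression().fit(scores, effort)             # effort model on factor scores
print(model.score(scores, effort))                         # fit quality on the training data
```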

Journal ArticleDOI
Lixing Zhu1, Hongzhi An1
TL;DR: An inverse regression method is proposed to seek the interesting projective direction; the minimization of the residual sum of squares is used as a criterion, and spline functions are applied to approximate the general nonlinear transform function.
Abstract: In order to explore the nonlinear structure hidden in high-dimensional data, some dimension reduction techniques have been developed, such as the Projection Pursuit technique (PP). However, PP involves an enormous computational load. To overcome this, an inverse regression method is proposed. In this paper, we discuss and develop this method. To seek the interesting projective direction, the minimization of the residual sum of squares is used as a criterion, and spline functions are applied to approximate the general nonlinear transform function. The algorithm is simple and saves computation. Under certain proper conditions, consistency of the estimators of the interesting direction is shown.
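A rough sketch in the spirit of the method above, under my own simplified construction (slicing on the response and a default spline smoother), not the authors' exact estimator: find a single projective direction by an inverse-regression step, then approximate the nonlinear link with a spline fitted along the projection.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(11)
n, p = 500, 6
X = rng.normal(size=(n, p))
beta = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0]) / 1.5      # true direction (unit norm)
y = np.sin(X @ beta) + 0.1 * rng.normal(size=n)             # nonlinear single-index model

# Inverse regression: average X within slices of y, then take the leading direction.
order = np.argsort(y)
slice_means = np.array([X[idx].mean(axis=0) for idx in np.array_split(order, 10)])
_, _, Vt = np.linalg.svd(slice_means - X.mean(axis=0), full_matrices=False)
direction = Vt[0]                                            # estimated projective direction

t = X @ direction
spline = UnivariateSpline(np.sort(t), y[np.argsort(t)], s=1.0)   # spline approximation of the link
print(abs(direction @ beta))                                 # should be close to 1 if recovered
```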