
Showing papers on "Feature selection" published in 1970


Journal ArticleDOI
TL;DR: A method is developed herein to use the Karhunen-Loeve expansion to extract features relevant to classification of a sample taken from one of two pattern classes.
Abstract: The Karhunen-Loeve expansion has been used previously to extract important features for representing samples taken from a given distribution. A method is developed herein to use the Karhunen-Loeve expansion to extract features relevant to classification of a sample taken from one of two pattern classes. Numerical examples are presented to illustrate the technique.

562 citations
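The scheme this paper describes can be sketched in a few lines. The following is a minimal illustration, not the authors' exact algorithm: it whitens the summed class scatter and then ranks eigenvector directions by how unequally the two classes load on them (the function name and the toy data are mine).

```python
import numpy as np

def two_class_kl_features(X1, X2, k=2):
    """Extract k features useful for discriminating two pattern classes.

    Sketch of a Karhunen-Loeve-based scheme in the spirit of the paper:
    whiten the summed class scatter, then eigendecompose one class's
    whitened scatter. In that space the two classes' eigenvalues sum
    to 1 per direction, so values near 1 or 0 mark class-specific axes.
    """
    S1 = X1.T @ X1 / len(X1)            # class-1 scatter matrix
    S2 = X2.T @ X2 / len(X2)            # class-2 scatter matrix
    evals, evecs = np.linalg.eigh(S1 + S2)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T   # symmetric whitener
    lam, U = np.linalg.eigh(W @ S1 @ W.T)
    order = np.argsort(np.abs(lam - 0.5))[::-1]    # most class-specific first
    return W.T @ U[:, order[:k]]        # columns map raw vectors to features

# Toy usage with synthetic two-class data
rng = np.random.default_rng(0)
X1 = rng.normal(0, 1, (200, 5)) * [3, 1, 1, 1, 1]
X2 = rng.normal(0, 1, (200, 5)) * [1, 1, 1, 1, 3]
T = two_class_kl_features(X1, X2, k=2)
print((X1 @ T).shape)                   # (200, 2) projected features
```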


Journal ArticleDOI
TL;DR: Feature selection techniques discussed include 1) the information-theoretic approach, 2) direct estimation of error probability, 3) feature-space transformation, and 4) an approach using a stochastic automata model; computer simulation results are presented and compared.
Abstract: The problem of feature selection in pattern recognition is briefly reviewed. Feature selection techniques discussed include 1) the information-theoretic approach, 2) direct estimation of error probability, 3) feature-space transformation, and 4) an approach using a stochastic automata model. These techniques are applied to the selection of features in the crop classification problem. Computer simulation results are presented and compared.

102 citations
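Of the four families above, the information-theoretic approach lends itself to a compact illustration. The sketch below ranks features by mutual information with the class label; the discretization scheme and function names are my own, not drawn from the paper.

```python
import numpy as np
from collections import Counter

def mutual_information(feature, labels):
    """I(X; Y) in bits between a discrete feature and class labels."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    px = Counter(feature)
    py = Counter(labels)
    mi = 0.0
    for (x, y), c in joint.items():
        pxy = c / n
        mi += pxy * np.log2(pxy / ((px[x] / n) * (py[y] / n)))
    return mi

def rank_features(X, y, bins=4):
    """Rank columns of X by mutual information with y (best first)."""
    scores = []
    for j in range(X.shape[1]):
        # Crude equal-frequency discretization of a continuous feature
        cuts = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        scores.append(mutual_information(tuple(np.digitize(X[:, j], cuts)),
                                         tuple(y)))
    return np.argsort(scores)[::-1]
```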


Journal ArticleDOI
TL;DR: The author briefly reviews the more important variable-selection procedures for multiple regression and suggests that Mantel exaggerates the advantages of the backward elimination, or "stepdown," procedure.
Abstract: Mantel (1970) has pointed out that many procedures are now available for selecting variables in multiple regression analyses. This note reviews the more important ones briefly, and suggests that Mantel exaggerates the advantages of the backward elimination, or "stepdown," procedure.

47 citations
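For readers unfamiliar with the procedure under debate: backward elimination starts from the full regression model and repeatedly discards the least significant variable. A minimal sketch follows, assuming a |t|-statistic stopping rule; the threshold and helper name are illustrative, not from either note.

```python
import numpy as np

def backward_elimination(X, y, t_drop=2.0):
    """Backward ("stepdown") variable selection for multiple regression.

    Start with all predictors and repeatedly drop the one with the
    smallest |t|-statistic until every remaining |t| exceeds t_drop.
    A plain illustration of the procedure, not either author's exact rule.
    """
    keep = list(range(X.shape[1]))
    while keep:
        A = np.column_stack([np.ones(len(y)), X[:, keep]])   # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        sigma2 = resid @ resid / (len(y) - A.shape[1])        # error variance
        cov = sigma2 * np.linalg.inv(A.T @ A)
        t = np.abs(beta[1:]) / np.sqrt(np.diag(cov)[1:])      # skip intercept
        worst = np.argmin(t)
        if t[worst] >= t_drop:
            break                    # all remaining variables significant
        keep.pop(worst)
    return keep                      # indices of retained predictors
```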



Journal ArticleDOI
TL;DR: The proposed feature-selection procedure has the unique property of designing a pattern classifier under a single performance criterion instead of the conventional division into receptor and categorizer, and enables the system to come closest to the minimum-risk ideal classifier.
Abstract: A feature-selection procedure is proposed for the class of distribution-free pattern classifiers [1], [2]. The selection procedure can be readily carried out on fixed (large) training samples using matrix inversion. If direct matrix inversion is to be avoided, the approximation method [4] or the stochastic-approximation procedure [2] can be applied to the training samples. The proposed procedure, aside from furnishing a statistical interpretation, has a mapping interpretation. It has the unique property of designing a pattern classifier under a single performance criterion instead of the conventional division into receptor and categorizer. It enables the system to come closest to the minimum-risk ideal classifier. In particular, for two-class problems having normal distributions with equal covariance matrices, equal costs for misrecognition, and equal a priori probabilities, the proposed procedure yields the optimum Bayes procedure without knowledge of the class distributions. Furthermore, the proposed feature-selection procedure is the same as that of the divergence computation. Experimental results are presented and are considered satisfactory.

18 citations
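The property claimed above, reaching the Bayes-optimal boundary for two equal-covariance normal classes without knowing the distributions, is shared by the classical minimum-mean-square-error linear discriminant trained by a single matrix inversion. The sketch below shows that flavor of procedure; it illustrates the family, not the paper's exact equations.

```python
import numpy as np

def mse_linear_classifier(X, y):
    """Train a linear discriminant by a single (pseudo-)matrix inversion.

    Least-squares fit of augmented patterns to +/-1 targets; for two
    Gaussian classes with equal covariances this recovers the Bayes-
    optimal boundary, mirroring the property claimed above. A sketch
    of this family of procedures, not the paper's own derivation.
    """
    A = np.column_stack([np.ones(len(X)), X])   # augment with bias term
    t = np.where(y == 1, 1.0, -1.0)             # +/-1 class targets
    w = np.linalg.pinv(A) @ t                   # pseudo-inverse solution
    return lambda Z: np.sign(np.column_stack([np.ones(len(Z)), Z]) @ w)
```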


Book ChapterDOI
TL;DR: This chapter discusses several recent developments in the application of statistical techniques to feature selection, feature ordering, mode estimation, and pattern classification and presents several Bayesian estimation techniques used in the estimation of unknown parameters of a probability density function.
Abstract: Publisher Summary A pattern recognition system consists of two parts, namely a feature extractor and a classifier. The function of the feature extractor is to extract or to measure the important characteristics from the input patterns. The extracted characteristics are called features, and they are supposed to best characterize all the possible input patterns. This chapter discusses several recent developments in the application of statistical techniques to feature selection, feature ordering, mode estimation, and pattern classification. It also presents the formulation of the problem of pattern classification as a statistical decision problem. The sequential decision model can be used for pattern classification. Forward and backward computational procedures are used for sequential classification systems. Feature selection and ordering problems can be studied in terms of the information-theoretic approach and the generalized Karhunen-Loeve expansion. The chapter also presents several Bayesian estimation techniques used in the estimation of unknown parameters of a probability density (or distribution) function.

9 citations
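The Bayesian estimation techniques the chapter surveys can be illustrated with the standard conjugate update for a Gaussian mean with known variance. This is textbook material rather than anything specific to the chapter:

```python
import numpy as np

def gaussian_mean_posterior(samples, sigma2, mu0, tau0_2):
    """Conjugate Bayesian update for the mean of N(mu, sigma2).

    Prior mu ~ N(mu0, tau0_2); returns the posterior mean and variance
    after observing `samples`. Standard conjugate-prior result, shown
    only to illustrate the kind of estimation the chapter discusses.
    """
    n = len(samples)
    precision = 1.0 / tau0_2 + n / sigma2        # posterior precision
    post_var = 1.0 / precision
    post_mean = post_var * (mu0 / tau0_2 + np.sum(samples) / sigma2)
    return post_mean, post_var
```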


Journal ArticleDOI
TL;DR: This work uses the wrapper approach for feature subset selection, running the FSS algorithm from the MLC++ library on datasets containing many features; the results confirm the superiority of the FSS wrapper approach, although in some cases the computational cost is excessive.
Abstract: Machine learning methods provide algorithms for mining databases in order to help analyze the information, find patterns, and improve prediction accuracy. In practice, the user of a data mining tool is interested in accuracy, efficiency, and comprehensibility for a specific domain, which may be reached through feature selection. In this work we use the wrapper approach for Feature Subset Selection. The FSS algorithm from the MLC++ library was used to run experiments with datasets containing many features. Accuracies for five inducers using all features, the features found by FSS, and the union of all those selected features are presented. The results confirm the superiority of the FSS wrapper approach, but in some cases the computational cost is excessive.

3 citations
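The wrapper approach evaluates a candidate feature subset by the cross-validated accuracy of the inducer itself, which is also why it is expensive. Below is a minimal greedy forward version using scikit-learn for the inducer and the cross-validation loop; the paper's experiments used the FSS component of MLC++, not this code.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def wrapper_forward_selection(X, y, inducer=None, cv=5):
    """Greedy forward wrapper feature subset selection.

    At each step, add the feature whose inclusion most improves the
    inducer's cross-validated accuracy; stop when no feature helps.
    """
    inducer = inducer or DecisionTreeClassifier(random_state=0)
    selected, best_score = [], 0.0
    remaining = set(range(X.shape[1]))
    while remaining:
        # Score every candidate subset by running the inducer itself
        trials = [(np.mean(cross_val_score(inducer, X[:, selected + [j]],
                                           y, cv=cv)), j)
                  for j in remaining]
        score, j = max(trials)
        if score <= best_score:
            break                      # no candidate improves accuracy
        selected.append(j)
        remaining.remove(j)
        best_score = score
    return selected, best_score
```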


01 Jan 1970
TL;DR: It is shown that the properties of the proposed technique can be applied to unsupervised clustering, where samples are classified into two classes without a priori knowledge of the class.
Abstract: The Karhunen-Loeve expansion has been used previously to extract important features for representing samples taken from a given distribution. A method is developed herein to use the Karhunen-Loeve expansion to extract features relevant to classification of a sample taken from one of two pattern classes. Numerical examples are presented to illustrate the technique. Also, it is shown that the properties of the proposed technique can be applied to unsupervised clustering, where given samples are classified into two classes without a priori knowledge of the class. Index Terms: Clustering, feature extraction, feature selection, Karhunen-Loeve expansion, pattern recognition, unsupervised learning.

3 citations


Journal ArticleDOI
TL;DR: A modification of the conventional mutual-information figure of merit for feature selection in pattern recognition is described, which results in greater recognition accuracy for a linear classifier.
Abstract: A modification of the conventional mutual-information figure of merit for feature selection in pattern recognition is described. A weighting function is combined with the mutual-information function, resulting in greater recognition accuracy for a linear classifier.

3 citations
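The abstract does not spell out the weighting function, so the sketch below leaves it as a caller-supplied vector multiplied into a per-feature mutual-information score, which conveys the shape of the modified figure of merit; all names here are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def weighted_mi_merit(X, y, weights):
    """Weighted mutual-information figure of merit per feature.

    `weights` is a caller-supplied vector (e.g., emphasizing features a
    linear classifier can exploit); an all-ones vector recovers the
    conventional mutual-information criterion.
    """
    mi = mutual_info_classif(X, y, random_state=0)   # per-feature I(X_j; Y)
    return np.asarray(weights) * mi
```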


01 Mar 1970
TL;DR: It is proposed that the consistency of feature selection, both within and between patterns, is a function of both the informational properties of the feature and the population of features present in the patterns.
Abstract: A feature analytic process is proposed as a basic mechanism in the encoding and storage of visual shapes by humans. It is hypothesized that local features, encoded as feature prototypes plus deviations, are stored in memory according to their positional relationships in the pattern. Two studies explored methodologies for the study of feature selection and attempted to determine whether humans would agree in their selection of features. Inspection of the results showed that humans do agree in their selection of features and tend to repeat the selection of similar visual configurations as features across patterns. It is proposed that the consistency of feature selection, both within and between patterns, is a function of both the informational properties of the feature and the population of features present in the patterns.

2 citations



Journal ArticleDOI
TL;DR: To generalize the method, both neural-network self-organizing feature mapping and supervised neural-network learning are used to classify waves according to patient age.
Abstract: Pattern recognition techniques, such as clustering algorithms, are applied to recordings of arterial distension waveforms to detect emergent properties of the data. The feature extraction stage is based on Fast Fourier Transform component analysis. Statistical K-means clustering helps in the feature selection step. To generalize the method, both neural-network self-organizing feature mapping and supervised neural-network learning are used to classify waves according to patient age. This process shows encouraging results for a set of blood pressure recordings belonging to three different decades.
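The first two stages of the described pipeline, FFT-based feature extraction followed by K-means clustering, might look roughly like the sketch below; the array shapes, coefficient count, and the choice of three clusters (one per decade) are my assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def fft_kmeans_stage(waveforms, n_coeffs=8, n_clusters=3):
    """FFT feature extraction plus K-means, per the described pipeline.

    waveforms: (n_recordings, n_samples) array of distension waveforms.
    Features are the magnitudes of the leading FFT components (DC
    dropped); K-means then groups recordings, here into 3 clusters,
    one per age decade, which is an assumption on my part.
    """
    spectra = np.abs(np.fft.rfft(waveforms, axis=1))[:, 1:n_coeffs + 1]
    features = spectra / spectra.sum(axis=1, keepdims=True)   # scale-free
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(features)
    return features, labels
```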

Proceedings ArticleDOI
01 Dec 1970
TL;DR: This paper presents some preliminary results on minimax feature extraction and selection using the Bhattacharyya coefficient as the selection criterion.
Abstract: This paper presents some preliminary results on minimax feature extraction and selection using the Bhattacharyya coefficient as the selection criterion. It is assumed that the only knowledge of the two classes of pattern vectors is the mean vectors and the covariance matrices, and the minimax approach is attractive for this case because it is relatively insensitive to the variation of the true distributions. The maximization of the Bhattacharyya coefficient is discussed briefly.
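Under the stated assumption that only the mean vectors and covariance matrices are known, the natural concrete quantity is the Bhattacharyya coefficient between two Gaussian classes, which has a standard closed form. The sketch below computes it; the minimax step of maximizing over admissible distributions is not shown.

```python
import numpy as np

def bhattacharyya_coefficient(m1, S1, m2, S2):
    """Bhattacharyya coefficient between two Gaussian classes.

    Standard closed form for N(m1, S1) vs N(m2, S2); a minimax
    selection scheme would seek features whose worst-case coefficient
    over the admissible distributions is smallest.
    """
    S = 0.5 * (S1 + S2)
    d = m2 - m1
    dist = (d @ np.linalg.solve(S, d)) / 8.0 \
         + 0.5 * np.log(np.linalg.det(S)
                        / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return np.exp(-dist)   # 1 = identical classes, near 0 = well separated
```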