
Showing papers on "Feature selection" published in 1982


Journal ArticleDOI
TL;DR: In this paper, the selection of variables for allocation procedures is examined and two types of technique are discussed, namely, those which use group separation as the criterion for variable selection and those which more appropriately employ error rates in allocation.
Abstract: The literature dealing with the selection of variables for allocation procedures is examined. Two types of technique are discussed, namely, those which use group separation as the criterion for variable selection and those which more appropriately employ error rates in allocation.
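
Both families of criteria are easy to illustrate. The sketch below is an illustration of the distinction, not code from the paper; scikit-learn and synthetic data are assumed. It ranks variables by a group-separation statistic and, separately, selects them greedily by cross-validated allocation error.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

# Criterion 1: group separation. Rank variables by the one-way ANOVA
# F-statistic between groups (larger F means better separation).
F, _ = f_classif(X, y)
by_separation = sorted(np.argsort(F)[::-1][:4])

# Criterion 2: allocation error rate. Greedy forward selection that adds
# the variable whose inclusion maximizes cross-validated accuracy.
selected = []
while len(selected) < 4:
    candidates = [j for j in range(X.shape[1]) if j not in selected]
    scores = [cross_val_score(LinearDiscriminantAnalysis(),
                              X[:, selected + [j]], y, cv=5).mean()
              for j in candidates]
    selected.append(candidates[int(np.argmax(scores))])

print("chosen by separation:", by_separation)
print("chosen by error rate:", sorted(selected))
```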

86 citations


Journal ArticleDOI
TL;DR: The problem of assessing the relative importance of variable subsets in discriminant analysis is discussed in this paper, with attention focused on techniques for determining those subsets which are "adequate" for discrimination from the descriptive viewpoint.
Abstract: The problem of assessing the relative importance of variable subsets in discriminant analysis is discussed. Attention is focused on techniques for determining those subsets which are ‘adequate’ for discrimination from the descriptive viewpoint. A number of procedures which have been proposed or used in the literature are described, illustrated and compared with reference to various aspects, including rationale, statistical significance testing and computational difficulties.
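
One common descriptive device of this kind is Wilks' lambda. The following sketch is my own illustration, not one of the paper's procedures; the 1.05 adequacy threshold is hypothetical. It scores all small variable subsets and flags those whose lambda nearly matches that of the full variable set.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
# Two groups; only the first two of four variables carry group separation.
X = np.vstack([rng.normal([m, m, 0, 0], 1.0, size=(50, 4)) for m in (0.0, 1.5)])
y = np.repeat([0, 1], 50)

def wilks_lambda(X, y):
    # Wilks' lambda = det(within-group scatter) / det(total scatter);
    # smaller values indicate better group discrimination.
    T = np.cov(X, rowvar=False) * (len(X) - 1)
    W = sum(np.cov(X[y == g], rowvar=False) * ((y == g).sum() - 1)
            for g in np.unique(y))
    return np.linalg.det(np.atleast_2d(W)) / np.linalg.det(np.atleast_2d(T))

full = wilks_lambda(X, y)
for k in (1, 2, 3):
    for subset in combinations(range(X.shape[1]), k):
        lam = wilks_lambda(X[:, list(subset)], y)
        if lam <= 1.05 * full:  # hypothetical adequacy threshold
            print(subset, round(lam, 3), "adequate")
```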

75 citations


01 Nov 1982
TL;DR: Statistical variable selection techniques are successfully applied to spline fitting; a backward elimination program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages.
Abstract: The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of using such bases.
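
A rough sketch of backward knot elimination is given below, with SciPy's LSQUnivariateSpline standing in for the FORTRAN programs; the deletion and stopping criteria are assumptions, and order determination is ignored (the order is fixed at k=3).

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = np.sin(x) + rng.normal(0, 0.1, x.size)

def rss(knots):
    # Residual sum of squares of the least-squares B-spline fit.
    spl = LSQUnivariateSpline(x, y, knots, k=3)
    return np.sum((spl(x) - y) ** 2)

knots = list(np.linspace(1, 9, 15))  # generous initial interior knots
while len(knots) > 1:
    # Try deleting each knot; keep the deletion that increases RSS least.
    trials = [rss(knots[:i] + knots[i + 1:]) for i in range(len(knots))]
    best = int(np.argmin(trials))
    if trials[best] > 1.10 * rss(knots):  # hypothetical stopping rule
        break
    knots.pop(best)

print(f"{len(knots)} knots retained:", np.round(knots, 2))
```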

52 citations


Book ChapterDOI
TL;DR: This chapter considers the measurement selection techniques of the discriminating category that use the selection method as a means for data reduction.
Abstract: Publisher Summary One of the major problems one encounters during the design phase of an automatic pattern recognition system is the identification of a good set of measurements. These measurements, to be performed on future unclassified patterns, should enable the recognition system to classify the patterns as correctly as possible. The identification of a measurement set is generally carried out in two phases. The first phase, pattern analysis, uses a variety of techniques that allow the designer to explore raw pattern data, and to infer some of its structure. In the second phase, commonly called feature selection or measurement selection, the preliminary measurement set must be reduced in size to meet the cost/performance trade-off. Various techniques exist to achieve the data reduction called for. One can characterize these techniques either according to the way in which the data reduction was achieved or according to the purpose of the reduced data. This chapter considers the measurement selection techniques of the discriminating category that use the selection method as a means for data reduction.
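
The two phases can be sketched concretely, if loosely, as follows (an illustration assuming scikit-learn, not the chapter's own procedures): exploratory pattern analysis via principal components, then a discriminating selection that keeps the k most class-separating measurements.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

# Phase 1: pattern analysis. Explore the structure of the raw measurements,
# here by inspecting the principal-component variance spectrum.
explained = PCA().fit(X).explained_variance_ratio_
print("variance explained per component:", np.round(explained, 3))

# Phase 2: measurement selection. Keep the k most discriminating
# measurements, with k set by the cost/performance trade-off.
k = 2
keep = SelectKBest(f_classif, k=k).fit(X, y).get_support(indices=True)
print(f"{k} measurements retained:", keep)
```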

10 citations


Proceedings ArticleDOI
Hermann Ney, R. Gierloff
01 May 1982
TL;DR: The experiments indicate that feature weighting and feature selection can reduce the error rates by a factor of two or more both for speaker identification and speaker verification.
Abstract: This paper describes a technique for increasing the ability of a text-dependent speaker recognition system to discriminate between speaker classes; this technique is to be performed in conjunction with the nonlinear time alignment between a reference pattern and a test pattern. Unlike the standard approach, where the training of the recognition system merely consists of storing and averaging or selecting the time-normalized training patterns separately for each class, the training phase of the system is extended in that a weight is determined for each individual feature component of the complete reference pattern according to the ability of the feature to distinguish between speaker classes. The weights depend on the time axis as well as on the frequency axis. The overall distance computed after nonlinear time alignment between a reference pattern and a test pattern thus becomes a function of the given set of weights of the reference class considered. For each class, the optimum weights result from the ideal criterion of minimum error rate. Instead of this criterion, the closely related but mathematically more convenient Fisher criterion is used, which leads to a closed-form solution for the unknown weights. Based on these weights, the selection of subsets of effective features is studied in order to further improve the class discrimination. The feature weighting and selecting techniques are tested using a database of utterances recorded over dialed-up telephone lines. The experiments indicate that feature weighting and feature selection can reduce the error rates by a factor of two or more both for speaker identification and speaker verification.
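
The closed-form weights are easy to sketch for fixed-length feature vectors. The code below is an illustration, not the authors' system; the nonlinear time alignment is omitted. It computes a per-component Fisher ratio as the weight, uses it in a weighted distance, and then zeroes the weights of the least effective components as a crude feature selection.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two speaker classes, 40 training utterances each, 12 feature components
# (stand-ins for the time- and frequency-indexed components of a reference pattern).
A = rng.normal([0, 1, 0, 2] * 3, 1.0, size=(40, 12))
B = rng.normal([1, 1, 2, 2] * 3, 1.0, size=(40, 12))

# Closed-form Fisher ratio per component: squared between-class mean
# difference over pooled within-class variance; large values mark
# components that distinguish the classes well.
between = (A.mean(0) - B.mean(0)) ** 2
within = A.var(0, ddof=1) + B.var(0, ddof=1)
w = between / within

def weighted_distance(test, reference, w=w):
    # The overall distance becomes a function of the reference class's weights.
    return np.sum(w * (test - reference) ** 2)

# Feature selection on top of weighting: drop the least effective half.
w_selected = np.where(w >= np.median(w), w, 0.0)
print("weights:", np.round(w, 2))
print("components kept:", int((w_selected > 0).sum()))
```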

9 citations


Journal ArticleDOI
TL;DR: It has been shown that, using a computationally simple technique, it is possible to classify single event-related potentials associated with upper and lower visual field stimulation with a high degree of accuracy.

7 citations


Journal ArticleDOI
TL;DR: The low accuracy of selecting the best version of a pattern recognition system in the small-test-sample case is demonstrated, and it is suggested that several similar pattern recognition problems be solved simultaneously.

6 citations


Journal ArticleDOI
TL;DR: This geometrical approach provides valuable insight into why different feature subsets may or may not have high discriminatory potential, and shows that clustering in the dual space, or its subspaces, does not necessarily yield an effective feature selection technique.

5 citations


DOI
C. B. Chittineni
01 May 1982
TL;DR: Details of a system designed to inspect web-type products are presented, and results of a study on inspection of magnetic tapes and abrasive sheets are described.
Abstract: Details of a system designed to inspect web-type products are presented. The sensing device used is a laser scanner, and a brief description of the scanner is given. A set of bandpass filters is proposed as feature extractors. The required characteristics of the filters are determined through digital simulation, using feature selection methods. The linear classifier is designed from a set of training signals. Contextual information is used in classifying signals as good product or into various defect categories. In addition, results of a study on inspection of magnetic tapes and abrasive sheets are described.
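
The signal path can be sketched loosely as follows, with assumed sample rate and passbands and synthetic signals; scipy and scikit-learn stand in for the scanner hardware and the paper's filter specifications. A bank of bandpass filters extracts band energies, which feed a linear classifier trained on labeled signals.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.linear_model import LogisticRegression

fs = 10_000.0                                      # assumed sample rate (Hz)
bands = [(100, 400), (400, 1200), (1200, 3000)]    # hypothetical passbands

def band_energies(signal):
    # Feature extraction: mean energy of the signal in each passband.
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        feats.append(np.mean(sosfiltfilt(sos, signal) ** 2))
    return feats

rng = np.random.default_rng(3)
good = [rng.normal(0, 1, 2048) for _ in range(30)]  # clean product signals
bad = [rng.normal(0, 1, 2048) + np.sin(np.arange(2048))  # defect adds a tone
       for _ in range(30)]
X = np.array([band_energies(s) for s in good + bad])
y = np.array([0] * 30 + [1] * 30)

# Linear classifier designed from the set of training signals.
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```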

4 citations



Proceedings ArticleDOI
01 Nov 1982
TL;DR: Optimum results were obtained by combining distance-based feature selection methods with nonlinear discriminant analysis, and the successive solution of 2-class problems improves the results compared with the solution of the 3-class problem.
Abstract: Numerical experiments were performed to find optimum feature extraction procedures for the classification of mouse L-fibroblasts into G1, S and G2 subpopulations. From images of these cells, different feature sets such as geometric, densitometric, textural and chromatin features were derived, which served as the database for the numerical experiments. Linear and nonlinear supervised stepwise learning techniques for the discrimination of the cells into G1, S and G2 were performed. The classification error was used as the criterion for evaluating the different numerical feature selection methods. Optimum results were obtained by combining distance-based feature selection methods with nonlinear discriminant analysis. The successive solution of 2-class problems improves the results compared with the solution of the 3-class problem. Linear discriminant analysis may then surpass quadratic discriminant analysis.
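
The cascade idea is simple to reproduce on synthetic data. The sketch below is my construction, not the paper's experiment, and whether the cascade actually wins depends on the data: first separate G1 from {S, G2}, then separate S from G2, and compare with a single 3-class discrimination.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Three synthetic subpopulations standing in for G1, S and G2 cells.
X = np.vstack([rng.normal(m, 1.2, size=(100, 5)) for m in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 100)  # 0=G1, 1=S, 2=G2
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# One 3-class problem.
three = LinearDiscriminantAnalysis().fit(Xtr, ytr)

# Successive 2-class problems: G1 vs. the rest, then S vs. G2.
stage1 = LinearDiscriminantAnalysis().fit(Xtr, ytr == 0)
rest = ytr != 0
stage2 = LinearDiscriminantAnalysis().fit(Xtr[rest], ytr[rest])

pred = np.where(stage1.predict(Xte), 0, stage2.predict(Xte))
print("3-class error:", round(1 - three.score(Xte, yte), 3))
print("cascade error:", round(float(np.mean(pred != yte)), 3))
```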

Journal ArticleDOI
TL;DR: Based on a pattern recognition study of human chromosomes, a model is proposed for selecting attributes for automatic pattern recognition: a stepwise data compression in which each step is monitored for loss of significant information by visual classification of the compressed patterns.

Journal ArticleDOI
TL;DR: A model is proposed which describes the organizational rules or criteria employed by human listeners when comparing members of a set of complex sounds; it assumes that feature selection is based on a Karhunen-Loève expansion of the low-level representations of the sound samples.
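
In this finite-dimensional setting, a Karhunen-Loève expansion amounts to an eigendecomposition of the sample covariance. A minimal sketch follows (my illustration of the decomposition the model assumes, on random stand-in data, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(5)
S = rng.normal(size=(200, 32))  # low-level representations of 200 sound samples

Sc = S - S.mean(axis=0)                          # center the data
evals, evecs = np.linalg.eigh(np.cov(Sc, rowvar=False))
order = np.argsort(evals)[::-1]                  # decreasing eigenvalue order

# Features are projections onto the leading covariance eigenvectors.
k = 4
features = Sc @ evecs[:, order[:k]]
print("retained variance fraction:", evals[order[:k]].sum() / evals.sum())
```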

Book ChapterDOI
01 Jan 1982
TL;DR: In this paper, the RV Coefficient is shown to be an acceptable criterion for variable selection in multivariate linear regression and an efficient computational procedure for forward selection based on this criterion is detailed.
Abstract: The RV Coefficient is shown to be an acceptable criterion for variable selection in multivariate linear regression. An efficient computational procedure for forward selection based on this criterion is detailed. Finally, an example is given.
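
A sketch of the idea follows; this is my implementation of the standard RV coefficient with a naive forward search, not the paper's efficient computational procedure.

```python
import numpy as np

def rv(X, Y):
    # RV coefficient of two column-centered data matrices:
    # tr(XX'YY') / sqrt(tr((XX')^2) * tr((YY')^2)).
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 8))
# Responses driven by the first two predictors only.
Y = X[:, :2] @ rng.normal(size=(2, 3)) + 0.3 * rng.normal(size=(80, 3))

selected = []
for _ in range(3):  # forward selection on the RV criterion
    candidates = [j for j in range(X.shape[1]) if j not in selected]
    scores = [rv(X[:, selected + [j]], Y) for j in candidates]
    selected.append(candidates[int(np.argmax(scores))])
print("selected predictors:", selected)  # columns 0 and 1 should come first
```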

Proceedings ArticleDOI
01 Nov 1982
TL;DR: The Matusita affinity [6] gives sharp upper bounds, and the divergence [4] gives lower bounds, on the probabilities of misclassification; the properties of these two distance measures are discussed.
Abstract: Distance measures of distributions are often used to estimate upper and lower bounds on the probabilities of misclassification. Sharp lower and upper bounds are of great importance for feature selection, that is, for classification-oriented feature interpretation. The Matusita affinity [6] gives sharp upper bounds, and the divergence [4] gives lower bounds, on the probabilities of misclassification. This paper discusses the properties of these two distance measures. Other measures are compared at length in [9].
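
Affinity-based bounds are easy to verify numerically. The sketch below is my illustration for two equal-prior univariate Gaussians; it reproduces only the affinity bounds, not the divergence-based lower bound of [4]. The affinity rho, the integral of sqrt(p*q), yields the classical bracket (1 - sqrt(1 - rho^2))/2 <= Pe <= rho/2 on the Bayes error.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

# Two equal-prior classes with Gaussian class-conditional densities.
p = norm(0.0, 1.0).pdf
q = norm(1.5, 1.0).pdf

rho, _ = quad(lambda t: np.sqrt(p(t) * q(t)), -10, 12)  # affinity integral
pe, _ = quad(lambda t: 0.5 * min(p(t), q(t)), -10, 12)  # true Bayes error

lower = (1 - np.sqrt(1 - rho ** 2)) / 2
upper = rho / 2
print(f"{lower:.4f} <= Pe = {pe:.4f} <= {upper:.4f}")
```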