
Showing papers on "Feature selection published in 1984"


Journal ArticleDOI
01 May 1984
TL;DR: Various computational algorithms are studied, considering only linear models and the least-squares criterion.
Abstract: Various computational algorithms are studied, considering only linear models and the least-squares criterion.
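
No further detail is given in this entry; purely as a hedged sketch of subset selection under the least-squares criterion (the exhaustive-search strategy, the function name, and its interface are illustrative assumptions, not the paper's algorithms):

```python
# Illustrative sketch only: exhaustive best-subset selection under the
# least-squares criterion; the brute-force search is an assumption,
# not the paper's method.
from itertools import combinations

import numpy as np

def best_subset(X, y, k):
    """Return the k columns of X minimizing the residual sum of squares."""
    best_rss, best_cols = np.inf, None
    for cols in combinations(range(X.shape[1]), k):
        Xs = X[:, cols]
        beta, rss, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        # lstsq returns an empty residual array in rank-deficient cases
        rss = rss[0] if rss.size else np.sum((y - Xs @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_cols = rss, cols
    return best_cols, best_rss
```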

287 citations


Journal ArticleDOI
01 Sep 1984
TL;DR: An optimal method for finding a minimum feature subset based on box classifiers is described, and numerical examples are presented to illustrate the effectiveness of the approach.
Abstract: An optimal method for finding a minimum feature subset based on box classifiers is described. Feature selection is represented as a problem of zero-one integer programming. An implicit enumeration method is developed in order to solve this problem. Numerical examples are presented to illustrate the effectiveness of the approach.
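
As a hedged illustration of the zero-one formulation, one common reading is a set-covering problem: each pair of differently labelled training regions must be separated along at least one selected feature, and implicit enumeration is branch-and-bound over the binary inclusion variables. This framing and all names below are assumptions, not necessarily the paper's exact model:

```python
# Hypothetical sketch of implicit enumeration for a minimum feature subset,
# cast as set covering for illustration.
def min_feature_subset(pairs_separated_by):
    """pairs_separated_by[j] = set of must-separate pairs that feature j separates."""
    n_feats = len(pairs_separated_by)
    all_pairs = set().union(*pairs_separated_by)
    best = list(range(n_feats))  # start from the trivial all-features solution

    def branch(j, chosen, covered):
        nonlocal best
        if len(chosen) >= len(best):          # bound: cannot beat the incumbent
            return
        if covered == all_pairs:              # feasible and strictly smaller
            best = chosen[:]
            return
        if j == n_feats:                      # no features left to add
            return
        # branch 1: include feature j
        branch(j + 1, chosen + [j], covered | pairs_separated_by[j])
        # branch 2: exclude feature j, only if the rest can still cover everything
        remaining = set().union(set(), *pairs_separated_by[j + 1:])
        if covered | remaining == all_pairs:
            branch(j + 1, chosen, covered)

    branch(0, [], set())
    return best
```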

44 citations


Journal ArticleDOI
TL;DR: The design of tree classifiers is considered from the statistical point of view, and the resulting tree classifier realizes a soft-decision strategy in contrast to the hard-decision strategy of the conventional decision tree.
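
Only the summary is available here; as a hedged sketch of what a soft-decision traversal can look like (the node layout and the sigmoid gate are illustrative assumptions, not the paper's construction), each sample's posterior is a branch-probability-weighted mixture over all leaves rather than the output of a single root-to-leaf path:

```python
# Hypothetical sketch of soft-decision tree traversal; node structure
# and gating function are assumptions for illustration only.
import numpy as np

def soft_classify(node, x):
    """Return a class-posterior vector by soft traversal of a binary tree."""
    if node["leaf"]:
        return node["posterior"]                     # class probabilities at leaf
    margin = x[node["feature"]] - node["threshold"]
    p_right = 1.0 / (1.0 + np.exp(-margin / node["softness"]))
    return (1 - p_right) * soft_classify(node["left"], x) \
         + p_right * soft_classify(node["right"], x)
```

A hard-decision tree is the limiting case of this gate as its softness parameter goes to zero.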

43 citations


Journal ArticleDOI
TL;DR: In this paper, methods are developed to extract additional image spatial features by means of linear and non-linear local filtering; feature selection methods are also developed, since it is usually not possible to use all the generated features.
Abstract: Feature extraction is an important factor in determining the accuracy that can be attained in the classification of multispectral images. The traditional per-point classification methods do not use all the available information, since they disregard the spatial relationships that exist among pixels belonging to the same class. In this paper, methods are developed to extract additional image spatial features by means of linear and non-linear local filtering. Feature selection methods are also developed, since it is usually not possible to use all the generated features. The classification stage is performed in a supervised mode using the maximum likelihood criterion. A quantitative analysis of the performance of the spatial features shows that an overall increase in classification precision is achieved, although at the expense of increased rejection levels, particularly on the borders between different fields.
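
As a hedged sketch of the kind of local filtering the abstract describes (window size and the specific filters are illustrative assumptions), a linear local mean and a non-linear local variance can be computed per pixel and appended to the spectral values as spatial features:

```python
# Hypothetical sketch: per-pixel local mean (linear filter) and local
# variance (non-linear filter) over a sliding window, zero-padded borders.
import numpy as np

def local_spatial_features(band, w=3):
    """Local mean and variance of one image band as spatial features."""
    h, wdt = band.shape
    pad = w // 2
    padded = np.pad(band.astype(float), pad)
    mean = np.empty_like(band, dtype=float)
    var = np.empty_like(band, dtype=float)
    for i in range(h):
        for j in range(wdt):
            win = padded[i:i + w, j:j + w]
            mean[i, j] = win.mean()      # linear local filter
            var[i, j] = win.var()        # non-linear local filter
    return mean, var
```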

31 citations


Journal ArticleDOI
TL;DR: Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I, and it is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form.
Abstract: Several authors have studied the problem of dimensionality reduction or feature selection using statistical distance measures, e.g., the Chernoff coefficient, Bhattacharyya distance, I-divergence, and J-divergence, because they generally felt that direct use of the probability-of-classification-error expression was either computationally or mathematically intractable. We show that for the difficult problem of testing one weakly stationary Gaussian stochastic process against another when the mean vectors are similar and the covariance matrices (patterns) differ, the probability-of-error expression may be dealt with directly using a combination of classical methods and distribution function theory. The results offer a new and accurate finite-dimensionality information-theoretic strategy for feature selection, and are shown, by use of examples, to be superior to the well-known Kadota-Shepp approach, which employs distance measures and asymptotics in its formulation. The present Part I deals with the theory; Part II deals with the implementation of a computer-based real-time pattern classifier which takes into account a realistic quasi-stationarity of the patterns.
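
For reference, the Bhattacharyya distance between two Gaussian classes, one of the distance measures the paper argues can be bypassed in favor of the error probability itself, has a standard closed form; the sketch below implements that textbook formula:

```python
# Standard Bhattacharyya distance between N(mu1, cov1) and N(mu2, cov2);
# a sketch for reference, not code from the paper.
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    cov = 0.5 * (cov1 + cov2)
    diff = mu2 - mu1
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, logdet = np.linalg.slogdet(cov)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (logdet - 0.5 * (logdet1 + logdet2))
    return term1 + term2
```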

25 citations


Journal Article
TL;DR: The error-measure feature selection method provides a useful, systematic way of developing and evaluating scene-segmentation algorithms through error-measure minimization.
Abstract: Development of scene-segmentation algorithms has generally been an ad hoc process. This paper presents a systematic technique for developing these algorithms using error-measure minimization. If scene segmentation is regarded as a problem of pixel classification, whereby each pixel of a scene is assigned to a particular object class, development of a scene-segmentation algorithm becomes primarily a process of feature selection. In this study, four methods of feature selection were used to develop segmentation techniques for cervical cytology images: (1) random selection, (2) manual selection (best features in the subjective judgment of the investigator), (3) eigenvector selection (ranking features according to the largest contribution to each eigenvector of the feature covariance matrix) and (4) selection using the scene-segmentation error measure A2. Four features were selected by each method from a universe of 35 features consisting of gray level, color, texture and special pixel-neighborhood features in 40 cervical cytology images. Evaluation of the results was done with a composite of the scene-segmentation error measure A2, which depends on the percentage of scenes with measurable error, the agreement of pixel class proportions, the agreement of the number of objects for each pixel class and the distance of each misclassified pixel to the nearest pixel of the misclassified class. Results indicate that random and eigenvector feature selection were the poorest methods, manual feature selection was somewhat better and error-measure feature selection was best. The error-measure feature selection method provides a useful, systematic method of developing and evaluating scene-segmentation algorithms.
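
As a hedged sketch of the eigenvector selection rule (method 3), with tie handling and other details as illustrative assumptions: rank the eigenvectors of the feature covariance matrix by eigenvalue and, for each, take the not-yet-chosen feature with the largest absolute loading:

```python
# Hypothetical sketch of method 3: select features by their largest
# contribution to each leading eigenvector of the feature covariance.
import numpy as np

def eigenvector_select(X, k):
    """Select k of X's columns via eigenvector loadings of the covariance."""
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # eigenvectors by decreasing variance
    chosen = []
    for idx in order[:k]:
        loadings = np.abs(eigvecs[:, idx])
        for f in np.argsort(loadings)[::-1]:   # largest contribution first
            if f not in chosen:
                chosen.append(f)
                break
    return chosen
```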

14 citations


Journal ArticleDOI
TL;DR: An algorithm is presented that predicts the best feature dimensionality, taking into account the number of training samples, and it is demonstrated that rather small training set sizes are still practical using these techniques.
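
The abstract is not reproduced here; purely as a hedged illustration of the underlying idea (for a finite training set, classification error bottoms out at some feature dimensionality), the sketch below measures hold-out error as features are added and returns the best prefix length. The fixed feature ordering and the nearest-mean classifier are assumptions, not the paper's algorithm:

```python
# Hypothetical sketch: pick the feature dimensionality that minimizes
# hold-out error for a nearest-mean classifier.
import numpy as np

def best_dimensionality(Xtr, ytr, Xte, yte, ordering):
    """Return the prefix length of `ordering` with lowest hold-out error."""
    errs = []
    classes = np.unique(ytr)
    for d in range(1, len(ordering) + 1):
        cols = ordering[:d]
        means = np.array([Xtr[ytr == c][:, cols].mean(axis=0) for c in classes])
        dists = ((Xte[:, cols][:, None, :] - means[None, :, :]) ** 2).sum(-1)
        pred = classes[dists.argmin(axis=1)]
        errs.append((pred != yte).mean())
    return int(np.argmin(errs)) + 1, errs
```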

7 citations


Proceedings ArticleDOI
09 Jan 1984
TL;DR: A set of algorithms for automatically detecting, segmenting and classifying tactical targets in FLIR (Forward Looking InfraRed) images is presented; the algorithms are implemented in an Intelligent Automatic Target Cueing (IATC) system.
Abstract: In this paper we present a set of algorithms used to automatically detect, segment and classify tactical targets in FLIR (Forward Looking InfraRed) images. These algorithms are implemented in an Intelligent Automatic Target Cueing (IATC) system. Target localization and segmentation are carried out using an intelligent preprocessing step followed by relaxation, or a modified double-gate filter followed by difference operators. The techniques make use of range, intensity and edge-density information. A set of robust features of the segmented targets is computed. These features are normalized and decorrelated. Feature selection is done using the Bhattacharyya measure. Classification techniques include a set of linear and quadratic classifiers, clustering algorithms, and an efficient K-nearest-neighbor algorithm. Facilities exist to use structural information, to use feedback to obtain more refined target boundaries, and to adapt the cuer to the required mission. The IATC incorporating the above algorithms runs in an automatic mode. The results are shown on a FLIR database consisting of 480 512x512 8-bit air-to-ground images.
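
As a hedged sketch of the normalize-and-decorrelate step mentioned in the abstract, implemented here as standard PCA whitening (the IATC's actual transform is not specified in this summary):

```python
# Hypothetical sketch: zero-mean, unit-variance, decorrelated features
# via PCA whitening; an illustration, not the IATC's transform.
import numpy as np

def whiten(F, eps=1e-8):
    """Whiten a feature matrix F (rows = samples, columns = features)."""
    Fc = F - F.mean(axis=0)
    cov = np.cov(Fc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs / np.sqrt(eigvals + eps)   # scale each principal axis to unit variance
    return Fc @ W
```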

7 citations


Journal ArticleDOI
TL;DR: This paper presents another solution to the problem of finding the smallest Bayes-classification-region-preserving dimension when the population parameters are unknown, and then compares the two solutions with the classical Wilks method and two other recent methods using a Monte Carlo simulation.

5 citations


Journal ArticleDOI
TL;DR: A multistage strategy for searching for the minimum sufficient subset of tests is described; it is reliable in the sense that it finds the globally smallest sufficient set of tests.


Book ChapterDOI
01 Jan 1984
TL;DR: A new concept, the DISCRIMINATORY POWER of EIGENVECTORS, was designed to adapt the multivariate statistical techniques of principal component analysis and discriminant analysis, in a single pass, to purposes in the medical field.
Abstract: The necessity of feature selection and feature extraction is well accepted in the field of data processing. Clinical chemistry and haematology also stand in need of them, but the required multivariate statistical techniques have not been introduced properly. A major drawback is that the test results under consideration have to be used in a descriptive as well as a discriminatory way, so a single straightforward statistical approach is lacking. A new concept, the DISCRIMINATORY POWER of EIGENVECTORS, was therefore designed to adapt the multivariate statistical techniques of principal component analysis and discriminant analysis, in a single pass, to purposes in the medical field. The syndromes of acute and chronic pancreatitis were used to illustrate the proposed method.
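
As a hedged sketch of one way to score eigenvectors by discriminatory power (a Fisher-style between-class to within-class variance ratio of each principal-component projection; the chapter's exact definition may differ):

```python
# Hypothetical sketch: rate each principal component of X by how well
# its scores separate the classes in y (Fisher ratio). Illustration only.
import numpy as np

def discriminatory_power(X, y):
    """Fisher ratio of each principal-component projection of X."""
    Xc = X - X.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    scores = Xc @ eigvecs                    # principal-component scores
    classes = np.unique(y)
    powers = []
    for j in range(scores.shape[1]):
        s = scores[:, j]
        grand = s.mean()
        between = sum((s[y == c].mean() - grand) ** 2 * (y == c).sum()
                      for c in classes)
        within = sum(((s[y == c] - s[y == c].mean()) ** 2).sum()
                     for c in classes)
        powers.append(between / within)
    return np.array(powers)
```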