
Showing papers by "Richard P. Lippmann published in 1990"


Proceedings Article
01 Oct 1990
TL;DR: The results suggest that genetic algorithms are becoming practical for pattern classification problems as faster serial and parallel computers are developed.
Abstract: Genetic algorithms were used to select and create features and to select reference exemplar patterns for machine vision and speech pattern classification tasks. For a complex speech recognition task, genetic algorithms required no more computation time than traditional approaches to feature selection but reduced the number of input features required by a factor of five (from 153 to 33 features). On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) which reduced classification error rates from 19% to almost 0%. Neural net and k-nearest-neighbor (KNN) classifiers were unable to provide such low error rates using only the original features. Genetic algorithms were also used to reduce the number of reference exemplar patterns for a KNN classifier. On a 10-class vowel-recognition problem with 338 training patterns, genetic algorithms reduced the number of stored exemplars from 338 to 43 without significantly increasing the classification error rate. In all applications, genetic algorithms were easy to apply and found good solutions in many fewer trials than exhaustive search would require. Run times were long, but not unreasonable. These results suggest that genetic algorithms are becoming practical for pattern classification problems as faster serial and parallel computers are developed.

82 citations
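The feature-selection scheme described above lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration of the feature-selection piece, in which each chromosome is a boolean mask over the input features and fitness combines leave-one-out 1-NN error with a small penalty per retained feature; the encoding, penalty weight, and GA parameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of GA feature selection (not the paper's exact method).
# Chromosome = boolean mask over features; fitness rewards low leave-one-out
# 1-NN error and few features, mirroring the paper's 153 -> 33 reduction goal.
import numpy as np

rng = np.random.default_rng(0)

def loo_nn_error(X, y, mask):
    """Leave-one-out 1-nearest-neighbor error using only the masked features."""
    if not mask.any():
        return 1.0
    Xm = X[:, mask]
    d = np.linalg.norm(Xm[:, None, :] - Xm[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)               # a pattern may not match itself
    return np.mean(y[d.argmin(axis=1)] != y)

def fitness(X, y, mask, penalty=0.002):
    # Penalizing mask.sum() pushes the GA toward small feature subsets.
    return -(loo_nn_error(X, y, mask) + penalty * mask.sum())

def ga_select(X, y, pop=40, gens=50, p_mut=0.02):
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5             # random initial bitmasks
    for _ in range(gens):
        f = np.array([fitness(X, y, m) for m in P])
        i, j = rng.integers(0, pop, (2, pop))  # binary tournament selection
        parents = P[np.where(f[i] > f[j], i, j)]
        kids = parents.copy()
        for k in range(0, pop - 1, 2):         # one-point crossover per pair
            c = rng.integers(1, n)
            kids[k, c:], kids[k + 1, c:] = parents[k + 1, c:], parents[k, c:]
        kids ^= rng.random((pop, n)) < p_mut   # bit-flip mutation
        P = kids
    f = np.array([fitness(X, y, m) for m in P])
    return P[f.argmax()]
```

Calling ga_select(X, y) returns the best mask in the final population, and mask.sum() gives the retained feature count. A small tournament-plus-crossover GA like this evaluates far fewer masks than the 2^n an exhaustive search would need, which is the practicality argument the abstract makes.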


Proceedings Article
01 Oct 1990
TL;DR: The results suggest that the selection of a classifier for a particular task should be guided not so much by small differences in error rate as by practical considerations concerning memory usage, computational resources, ease of implementation, and restrictions on training and classification times.
Abstract: Seven different pattern classifiers were implemented on a serial computer and compared using artificial and speech recognition tasks. Two neural network classifiers (radial basis function and high-order polynomial GMDH network) and five conventional classifiers (Gaussian mixture, linear tree, k-nearest-neighbor, KD-tree, and condensed k-nearest-neighbor) were evaluated. Classifiers were chosen to be representative of different approaches to pattern classification and to complement and extend those evaluated in a previous study (Lee and Lippmann, 1989). This and the previous study both demonstrate that classification error rates can be equivalent across different classifiers when they are powerful enough to form minimum-error decision regions, when they are properly tuned, and when sufficient training data is available. Practical characteristics such as training time, classification time, and memory requirements, however, can differ by orders of magnitude. These results suggest that the selection of a classifier for a particular task should be guided not so much by small differences in error rate as by practical considerations concerning memory usage, computational resources, ease of implementation, and restrictions on training and classification times.

60 citations
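The condensed k-nearest-neighbor classifier compared above illustrates the memory-versus-accuracy trade-off the paper emphasizes. The sketch below follows Hart's classic condensing heuristic, which retains only the exemplars needed to classify the training set correctly; it is a generic illustration, and the variant evaluated in the paper may differ.

```python
# Hart-style condensed nearest neighbor: keep a subset of exemplars that still
# classifies every training pattern correctly with 1-NN. Memory and lookup
# cost drop, usually at a small cost in error rate.
import numpy as np

def condense(X, y, seed=0):
    """Return indices of a reduced exemplar set (Hart's CNN heuristic)."""
    order = np.random.default_rng(seed).permutation(len(X))
    store = [order[0]]                  # seed the store with one exemplar
    changed = True
    while changed:                      # repeat passes until no additions
        changed = False
        for i in order:
            S = np.asarray(store)
            nearest = S[np.linalg.norm(X[S] - X[i], axis=1).argmin()]
            if y[nearest] != y[i]:      # misclassified -> must be stored
                store.append(i)
                changed = True
    return np.asarray(store)
```

Condensing trades a usually small increase in error rate for large savings in memory and classification time, exactly the kind of practical difference the abstract argues should drive classifier choice.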


Proceedings Article
17 Jun 1990
TL;DR: On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) which dramatically reduced classification error rates.
Abstract: Genetic algorithms were used for feature selection and creation in two pattern-classification problems. On a machine-vision inspection task, genetic algorithms performed no better than conventional approaches to feature selection but required much more computation. On a difficult artificial machine-vision task, genetic algorithms were able to create new features (polynomial functions of the original features) which dramatically reduced classification error rates. Neural network and nearest-neighbor classifiers were unable to provide such low error rates using only the original features.

34 citations
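As a rough illustration of feature creation (as opposed to selection), the hypothetical sketch below encodes a candidate polynomial feature as a vector of small integer exponents and evolves it to minimize leave-one-out 1-NN error once the feature is appended; the encoding, scaling, and GA details are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch of GA feature creation: each chromosome is an exponent
# vector e defining one monomial feature prod_j x_j**e_j, and fitness is the
# leave-one-out 1-NN error after that feature is appended to the inputs.
import numpy as np

rng = np.random.default_rng(1)

def loo_nn_error(X, y):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.mean(y[d.argmin(axis=1)] != y)

def with_feature(X, expo):
    """Append the monomial feature prod_j x_j**expo_j, crudely rescaled."""
    f = np.prod(X ** expo, axis=1, keepdims=True)
    return np.hstack([X, f / (np.abs(f).max() + 1e-12)])

def evolve_feature(X, y, pop=30, gens=40, max_exp=3, p_mut=0.1):
    n = X.shape[1]
    P = rng.integers(0, max_exp + 1, (pop, n))   # exponent chromosomes
    for _ in range(gens):
        err = np.array([loo_nn_error(with_feature(X, e), y) for e in P])
        i, j = rng.integers(0, pop, (2, pop))    # tournament selection
        P = P[np.where(err[i] < err[j], i, j)]
        mut = rng.random((pop, n)) < p_mut       # point mutation on exponents
        P[mut] = rng.integers(0, max_exp + 1, mut.sum())
    err = np.array([loo_nn_error(with_feature(X, e), y) for e in P])
    return P[err.argmin()]                       # best exponent vector found
```

A created feature of this form can make classes separable that a nearest-neighbor classifier on the original features cannot resolve, which is the effect the abstract reports on the artificial task.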



Proceedings Article
17 Jun 1990
TL;DR: A physiological front-end preprocessor for speech recognition was evaluated using a large isolated-word database in quiet and noise; it matched a conventional mel-filter-bank front-end for normal speech and provided a slight improvement in error rate at very low SNRs, but required substantially more computation.
Abstract: A physiological front-end preprocessor for speech recognition was evaluated using a large isolated-word database in quiet and noise. The front-end was based on the ensemble interval histogram (EIH) model developed by O. Ghitza. This model provides phase or synchrony information similar to that available on the auditory nerve. A modified EIH front-end was implemented and tested using the Lincoln robust hidden Markov model isolated-word recognizer with a multistyle database at various signal-to-noise ratios (SNRs). The modified EIH front-end performed as well as a conventional mel-filter-bank front-end for normal speech. It provided a slight improvement in error rate at very low SNRs but required substantially more computation than the mel-filter-bank front-end.

1 citation
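The EIH model accumulates histograms of inter-crossing intervals across a bank of cochlear-like channels, capturing the timing (synchrony) information that firing patterns on the auditory nerve carry. The sketch below is a greatly simplified, hypothetical illustration of that idea, using log-spaced Butterworth bandpass channels and a few crossing levels per channel; it is not Ghitza's full model or the modified front-end evaluated in the paper.

```python
# Greatly simplified interval-histogram sketch (illustrative only): filter the
# signal into bands, find positive-going crossings of several levels in each
# band, and histogram the reciprocal crossing intervals as frequencies.
import numpy as np
from scipy.signal import butter, lfilter

def interval_histogram(x, fs, n_channels=8, levels=(0.1, 0.2, 0.4), n_bins=64):
    hi_edge = min(4000.0, 0.45 * fs)                  # stay below Nyquist
    edges = np.logspace(np.log10(100.0), np.log10(hi_edge), n_channels + 1)
    freq_bins = np.logspace(np.log10(50.0), np.log10(hi_edge), n_bins + 1)
    hist = np.zeros(n_bins)
    for lo, hi in zip(edges[:-1], edges[1:]):
        b, a = butter(2, [lo, hi], btype='band', fs=fs)
        ch = lfilter(b, a, x)                         # one "cochlear" channel
        for level in np.asarray(levels) * (np.abs(ch).max() + 1e-12):
            up = np.flatnonzero((ch[:-1] < level) & (ch[1:] >= level))
            if len(up) > 1:
                f = fs / np.diff(up)                  # interval -> frequency
                hist += np.histogram(f, bins=freq_bins)[0]
    return hist / (hist.sum() + 1e-12)                # normalized histogram
```

A front end like this replaces the single magnitude spectrum of a mel filter bank with many per-channel, per-level interval measurements, which is consistent with the abstract's observation that the EIH approach costs substantially more computation.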