Open Access

Error-Correcting Output Codes: A General Method for Improving Multiclass Inductive Learning Programs

TLDR
It is demonstrated that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
Abstract
Multiclass learning problems involve finding a definition for an unknown function f(x) whose range is a discrete set containing k > 2 values (i.e., k "classes"). The definition is acquired by studying large collections of training examples of the form [xi, f(xi)]. Existing approaches to this problem include (a) direct application of multiclass algorithms such as the decision-tree algorithms ID3 and CART, (b) application of binary concept learning algorithms to learn individual binary functions for each of the k classes, and (c) application of binary concept learning algorithms with distributed output codes such as those employed by Sejnowski and Rosenberg in the NETtalk system. This paper compares these three approaches to a new technique in which BCH error-correcting codes are employed as a distributed output representation. We show that these output representations improve the performance of ID3 on the NETtalk task and of back propagation on an isolated-letter speech-recognition task. These results demonstrate that error-correcting output codes provide a general-purpose method for improving the performance of inductive learning programs on multiclass problems.
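The core idea in the abstract can be sketched briefly: each of the k classes is assigned a binary codeword, one binary learner is trained per code bit, and at prediction time the class whose codeword is nearest in Hamming distance to the learners' outputs is chosen, so a minimum inter-codeword distance of d corrects up to ⌊(d−1)/2⌋ single-bit errors. The code matrix and class count below are illustrative assumptions, not the BCH codes used in the paper:

```python
import numpy as np

# Hypothetical 4-class problem with a 7-bit output code (not the paper's
# BCH construction). Each row is the codeword assigned to one class;
# rows here are pairwise Hamming distance 4 apart, so any single
# erroneous binary learner is corrected at decoding time.
CODE = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def decode(bits):
    """Return the class whose codeword is nearest in Hamming
    distance to `bits`, the 0/1 outputs of the 7 binary learners."""
    dists = np.sum(CODE != np.asarray(bits), axis=1)
    return int(np.argmin(dists))
```

For example, `decode([0, 1, 1, 1, 1, 0, 0])` recovers class 1, and flipping any single bit of that vector still decodes to class 1, which is the error-correcting behavior the paper exploits.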

Citations
Journal ArticleDOI

An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants

TL;DR: It is found that Bagging improves when probabilistic estimates are used in conjunction with no-pruning, and when the data is backfit, and that Arc-x4 behaves differently from AdaBoost when reweighting is used instead of resampling, indicating a fundamental difference.
MonographDOI

Combining Pattern Classifiers

Book

Artificial Intelligence: A New Synthesis

TL;DR: Intelligent agents are employed as the central characters in this new introductory text and Nilsson gradually increases their cognitive horsepower to illustrate the most important and lasting ideas in AI.
Journal ArticleDOI

Diversity creation methods: a survey and categorisation

TL;DR: This paper reviews the varied attempts to provide a formal explanation of error diversity, including several heuristic and qualitative explanations in the literature, and introduces the idea of implicit and explicit diversity creation methods, and three dimensions along which these may be applied.
Journal ArticleDOI

A comparative study of feature selection and multiclass classification methods for tissue classification based on gene expression

TL;DR: The results indicate that the multiclass classification problem is much more difficult than the binary one for gene expression datasets, because the data are high-dimensional and the sample sizes are small.
References
Book ChapterDOI

Learning internal representations by error propagation

TL;DR: This chapter contains sections titled: The Problem, The Generalized Delta Rule, Simulation Results, Some Further Generalizations, Conclusion.
Proceedings ArticleDOI

A theory of the learnable

TL;DR: This paper regards learning as the phenomenon of knowledge acquisition in the absence of explicit programming, and gives a precise methodology for studying this phenomenon from a computational viewpoint.
Book ChapterDOI

Learning Efficient Classification Procedures and Their Application to Chess End Games

TL;DR: A series of experiments dealing with the discovery of efficient classification procedures from large numbers of examples is described, with a case study from the chess end game king-rook versus king-knight.
Journal ArticleDOI

On a class of error correcting binary group codes

TL;DR: A general method of constructing error correcting binary group codes is obtained and an example is worked out to illustrate the method of construction.
Journal ArticleDOI

A time-delay neural network architecture for isolated word recognition

TL;DR: A translation-invariant back-propagation network is described that performs better than a sophisticated continuous acoustic parameter hidden Markov model on a noisy, 100-speaker confusable vocabulary isolated word recognition task.