
Showing papers on "Random forest published in 1994"


Posted Content
TL;DR: The authors investigated the relationship between the size of a decision tree consistent with some training data and the tree's accuracy on test data, and found that the smallest consistent decision trees are, on average, less accurate than slightly larger consistent trees.
Abstract: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees. In particular, we investigated the relationship between the size of a decision tree consistent with some training data and the accuracy of the tree on test data. The experiments were performed on a massively parallel Maspar computer. The results of the experiments on several artificial and two real-world problems indicate that, for many of the problems investigated, smaller consistent decision trees are on average less accurate than slightly larger consistent trees.
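The experimental setup described above can be reproduced at toy scale: enumerate every decision tree up to a small depth over a few boolean features, keep only the trees consistent with (i.e., perfectly classifying) a training sample, and measure each one's accuracy on held-out points. A minimal sketch follows; the target concept (XOR of two features), the depth bound, and the train/test split are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# Enumerate all decision trees up to a depth bound over 3 boolean features,
# filter to those consistent with a training sample, and report each
# consistent tree's size and test-set accuracy.

FEATURES = range(3)

def enumerate_trees(depth):
    """All trees of depth <= depth: a tree is a ('leaf', label) or (feature, left, right)."""
    trees = [("leaf", 0), ("leaf", 1)]
    if depth == 0:
        return trees
    subs = enumerate_trees(depth - 1)
    trees += [(f, l, r) for f in FEATURES for l in subs for r in subs]
    return trees

def predict(tree, x):
    if tree[0] == "leaf":
        return tree[1]
    f, left, right = tree
    return predict(left, x) if x[f] == 0 else predict(right, x)

def size(tree):
    """Number of internal (splitting) nodes."""
    return 0 if tree[0] == "leaf" else 1 + size(tree[1]) + size(tree[2])

target = lambda x: x[0] ^ x[1]                  # x2 is an irrelevant feature
all_points = list(product([0, 1], repeat=3))    # test on the full space
train = [x for x in all_points if x[2] == 0]    # 4 training points

consistent = [t for t in enumerate_trees(2)
              if all(predict(t, x) == target(x) for x in train)]

for t in consistent:
    acc = sum(predict(t, x) == target(x) for x in all_points) / len(all_points)
    print(size(t), acc)
```

On this tiny XOR concept only the two 3-node trees that split on both relevant features survive the consistency filter; the paper's experiments do the same enumeration at much larger scale on a Maspar machine and then compare average accuracy across size classes.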

82 citations


Journal ArticleDOI
TL;DR: In this paper, the authors report on a series of experiments in which all decision trees consistent with the training data are constructed, in order to gain an understanding of the properties of the set of consistent trees.
Abstract: We report on a series of experiments in which all decision trees consistent with the training data are constructed. These experiments were run to gain an understanding of the properties of the set of consistent decision trees and the factors that affect the accuracy of individual trees.

3 citations


Proceedings ArticleDOI
P. Gabbert
02 Oct 1994
TL;DR: Several new splitting criteria for constructing trees using uncertain learning sets are suggested and additional extensions are provided for developing new misclassification estimates and test sample estimates for the decision tree.
Abstract: Decision trees are structures which form a partition of some given measurement space for the purpose of classifying new measurements. Typically, the construction of the decision tree is performed using a learning set (or training set) where the class of each observation is certain. An uncertain learning set is a collection of measurements and probability masses over the set of classes for each measurement. The uncertainty in the learning set affects several of the results on decision trees previously presented in the literature. This paper suggests several new splitting criteria for constructing trees using uncertain learning sets. Additional extensions are provided for developing new misclassification estimates and test sample estimates for the decision tree. Finally, an application of these techniques in sensor fusion is outlined.
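One natural way to adapt a splitting criterion to such uncertain learning sets is to compute impurity from the summed class probability masses rather than from hard label counts. The sketch below shows a mass-weighted Gini criterion in that spirit; the function names, the thresholded split form, and the example data are illustrative assumptions, not the paper's actual criteria.

```python
# Splitting criterion sketch for uncertain learning sets: each example
# carries a probability mass over classes instead of a single hard label.

def gini(masses):
    """Gini impurity of a group, computed from summed per-class masses."""
    total = sum(sum(m) for m in masses)
    if total == 0:
        return 0.0
    class_mass = [sum(m[c] for m in masses) for c in range(len(masses[0]))]
    return 1.0 - sum((cm / total) ** 2 for cm in class_mass)

def split_score(examples, feature, threshold):
    """Mass-weighted average child impurity for the split x[feature] <= threshold.
    Lower is better, as with the ordinary Gini splitting rule."""
    left  = [m for x, m in examples if x[feature] <= threshold]
    right = [m for x, m in examples if x[feature] >  threshold]
    n = lambda group: sum(sum(m) for m in group)
    total = n(left) + n(right)
    return (n(left) * gini(left) + n(right) * gini(right)) / total

# Two-class toy data: one feature value and a soft label (p(class0), p(class1)).
data = [((0.2,), (0.9, 0.1)),
        ((0.4,), (0.8, 0.2)),
        ((0.6,), (0.1, 0.9)),
        ((0.8,), (0.2, 0.8))]

print(split_score(data, 0, 0.5))   # split that separates the soft labels well
print(split_score(data, 0, 0.3))   # worse split, higher weighted impurity
```

With certain labels (one-hot masses) this reduces to the standard Gini rule, which is the sense in which criteria like this extend the classical tree-construction results to uncertain learning sets.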

2 citations