Random Forests
Citations
Cites background or methods from "Random Forests"
...More precisely, the random forest is an ensemble method where the weak learners are decision trees trained on random subsamples of the data [24]....
[...]
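The excerpt above describes the core construction: an ensemble of decision trees, each trained on a random subsample of the data. A minimal pure-Python sketch of that idea follows; the 1-D threshold "stump" stands in for a full decision tree, and the toy data and all function names are illustrative, not Breiman's actual algorithm.

```python
import random

def train_stump(sample):
    """Toy weak learner standing in for a decision tree: a single threshold
    placed halfway between the two class means of a 1-D feature."""
    c0 = [x for x, y in sample if y == 0]
    c1 = [x for x, y in sample if y == 1]
    if not c0 or not c1:                      # degenerate bootstrap sample
        maj = 0 if len(c0) >= len(c1) else 1
        return lambda x: maj
    thresh = (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2
    return lambda x: int(x > thresh)

def train_forest(data, n_trees=25, seed=0):
    """Train each weak learner on an independent random subsample
    (a bootstrap sample, drawn with replacement)."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        sample = [rng.choice(data) for _ in data]   # sample with replacement
        forest.append(train_stump(sample))
    return forest

def predict(forest, x):
    """Aggregate the ensemble by majority vote."""
    return int(sum(tree(x) for tree in forest) * 2 > len(forest))

# Toy 1-D data: class 0 clusters near 1, class 1 near 10.
data = [(v, 0) for v in (0.5, 1.0, 1.5, 2.0)] + [(v, 1) for v in (8.0, 9.0, 10.0, 11.0)]
forest = train_forest(data)
```

Because each tree sees a different random subsample, the trees disagree on borderline points, and the majority vote smooths out their individual errors.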
...Introduced in 2001 [24], random forests are a class of scalable and highly parallelizable regression models that have been very successful in practice [42]....
[...]
Cites background or methods from "Random Forests"
...Random forests (hereafter RF) is one such method (Breiman 2001)....
[...]
...For the classification situation, Breiman (2001) showed that classification accuracy can be significantly improved by aggregating the results of many classifiers that have little bias by averaging or voting, if the classifiers have low pairwise correlations....
[...]
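Breiman's point in the excerpt above — that aggregating many low-bias, weakly correlated classifiers by voting improves accuracy — can be checked with a small Monte-Carlo simulation. The sketch below uses fully independent (zero-correlation) classifiers as the idealized case; the accuracy 0.7 and all parameters are illustrative assumptions.

```python
import random

def vote_accuracy(n_voters, p_correct, trials=20000, seed=1):
    """Monte-Carlo estimate of majority-vote accuracy for n_voters
    independent classifiers, each individually correct with
    probability p_correct."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < p_correct for _ in range(n_voters))
        correct += votes * 2 > n_voters       # majority voted correctly
    return correct / trials

single = vote_accuracy(1, 0.7)      # one classifier: about 0.7
ensemble = vote_accuracy(25, 0.7)   # 25 independent voters: much higher
```

As the pairwise correlation between voters rises toward 1, the ensemble degenerates toward a single classifier, which is why the excerpt stresses *low* pairwise correlations.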
References
[...]
"Random Forests" refers background or methods in this paper
...But none of these three forests do as well as Adaboost (Freund & Schapire, 1996) or other algorithms that work by adaptive reweighting (arcing) of the training set (see Breiman, 1998b; Dietterich, 1998; Bauer & Kohavi, 1999)....
[...]
...In its original version, Adaboost (Freund & Schapire, 1996) is a deterministic algorithm that selects the weights on the training set for input to the next classifier based on the misclassifications in the previous classifiers....
[...]
"Random Forests" refers background in this paper
...Ho (1998) has written a number of papers on “the random subspace” method which does a random selection of a subset of features to use to grow each tree....
[...]
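Ho's "random subspace" method described above grows each tree on a randomly chosen subset of the features. A minimal sketch of that feature-selection step follows; the sizes and names are illustrative assumptions, and a real implementation would then grow each tree using only its assigned feature indices.

```python
import random

def random_subspaces(n_features, subset_size, n_trees, seed=0):
    """For each tree, draw a random subset of feature indices without
    replacement (Ho's random subspace method); each tree is then grown
    using only the features in its subset."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_trees)]

# e.g. 5 trees, each restricted to 3 of 10 available features
subspaces = random_subspaces(n_features=10, subset_size=3, n_trees=5)
```

Note the contrast with Breiman's forests, which inject randomness per *split* (and via bootstrap sampling) rather than fixing one feature subset per tree.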
...Keywords: classification, regression, ensemble...
[...]
"Random Forests" refers background or methods in this paper
...But none of these three forests do as well as Adaboost (Freund & Schapire, 1996) or other algorithms that work by adaptive reweighting (arcing) of the training set (see Breiman, 1998b; Dietterich, 1998; Bauer & Kohavi, 1999)....
[...]
...The second is that bagging can be used to give ongoing estimates of the generalization error (PE∗) of the combined ensemble of trees, as well as estimates for the strength and correlation....
[...]
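The out-of-bag (OOB) estimate mentioned in the excerpt works because each bootstrap sample omits roughly a third of the training points; those points can be classified using only the trees that never saw them, giving a generalization-error estimate without a held-out set. A minimal sketch under stated assumptions follows — the 1-D threshold stump, toy data, and function names are illustrative, not Breiman's implementation.

```python
import random

def train_stump(sample):
    """Toy stand-in for a tree: threshold halfway between class means."""
    c0 = [x for x, y in sample if y == 0]
    c1 = [x for x, y in sample if y == 1]
    if not c0 or not c1:                     # degenerate bootstrap sample
        maj = 0 if len(c0) >= len(c1) else 1
        return lambda x: maj
    thresh = (sum(c0) / len(c0) + sum(c1) / len(c1)) / 2
    return lambda x: int(x > thresh)

def oob_error(data, n_trees=50, seed=0):
    """Out-of-bag error estimate: each point is classified only by the
    trees whose bootstrap sample omitted it, then majority-voted."""
    rng = random.Random(seed)
    n = len(data)
    votes = [[0, 0] for _ in range(n)]       # per-point class-vote tallies
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap indices
        tree = train_stump([data[i] for i in idx])
        for i in set(range(n)) - set(idx):           # out-of-bag points
            votes[i][tree(data[i][0])] += 1
    wrong = total = 0
    for (x, y), v in zip(data, votes):
        if v[0] + v[1] == 0:
            continue                                 # never out-of-bag
        total += 1
        wrong += int(v[1] > v[0]) != y
    return wrong / total

# Well-separated toy data: class 0 near 1, class 1 near 10.
data = ([(1.0 + 0.1 * i, 0) for i in range(10)]
        + [(10.0 + 0.1 * i, 1) for i in range(10)])
err = oob_error(data)
```

On this cleanly separable toy data the OOB estimate is near zero; on real data it tracks test-set error closely, which is the practical appeal the excerpt describes.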