Random Forests
Leo Breiman
Machine Learning, Vol. 45, Issue 1, pp. 5–32
TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the splitting, and the ideas are also applicable to regression.
Abstract:
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to AdaBoost (Y. Freund & R. Schapire, Machine Learning: Proceedings of the Thirteenth International Conference, 1996, 148–156), but are more robust with respect to noise. Internal estimates monitor error, strength, and correlation, and these are used to show the response to increasing the number of features used in the splitting. Internal estimates are also used to measure variable importance. These ideas are also applicable to regression.
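As a concrete illustration of the procedure the abstract describes, here is a minimal sketch assuming scikit-learn's RandomForestClassifier as a stand-in; the synthetic dataset and parameter values are illustrative assumptions, with the out-of-bag score playing the role of the internal error estimate and feature_importances_ that of the variable-importance measure.

```python
# Minimal sketch (assumed scikit-learn API; synthetic data, illustrative
# parameters). Each tree is grown on a bootstrap sample -- the i.i.d. random
# vector of the abstract -- and max_features limits the random selection of
# features tried at each split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                oob_score=True, random_state=0)
forest.fit(X_tr, y_tr)

# Out-of-bag error stands in for the internal generalization-error estimate.
print("OOB error estimate:", 1 - forest.oob_score_)
print("Test accuracy:", forest.score(X_te, y_te))
print("Variable importances:", forest.feature_importances_)
```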
Citations
Journal Article
Prediction of the landslide susceptibility: Which algorithm, which precision?
TL;DR: The first comprehensive comparison of the performance of ten advanced machine learning techniques (MLTs) is presented, covering artificial neural networks (ANN), boosted regression trees (BRT), classification and regression trees (CART), generalized linear models (GLM), generalized additive models (GAM), multivariate adaptive regression splines (MARS), naive Bayes (NB), quadratic discriminant analysis (QDA), random forests (RF), and support vector machines (SVM).
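As a rough illustration of this kind of benchmark, the sketch below cross-validates a handful of the listed MLTs; the synthetic data and model settings are assumptions standing in for a real landslide inventory.

```python
# Hedged sketch: cross-validating several of the listed MLTs (synthetic data
# stands in for a landslide inventory; model settings are assumptions).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_redundant=0, random_state=0)

models = {
    "BRT": GradientBoostingClassifier(random_state=0),  # boosted regression trees
    "GLM": LogisticRegression(max_iter=1000),           # logistic GLM
    "NB": GaussianNB(),
    "QDA": QuadraticDiscriminantAnalysis(),
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```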
Journal Article
Land-use/cover classification in a heterogeneous coastal landscape using RapidEye imagery: evaluating the performance of random forest and support vector machines classifiers
TL;DR: An evaluation of the different RapidEye bands with the two classifiers showed that incorporating the red-edge band has a significant effect on overall classification accuracy for vegetation cover types, indicating that the pursuit of high classification accuracy with high-spatial-resolution imagery over complex landscapes remains paramount.
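The band-evaluation idea can be sketched by scoring the same classifier with and without one feature column; the data and the band index below are illustrative assumptions, not the RapidEye dataset.

```python
# Hedged sketch of the band-evaluation idea: the same classifier is scored
# with and without the red-edge feature (synthetic stand-in data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Pretend columns 0-4 are the five RapidEye bands; column 3 is red-edge.
X, y = make_classification(n_samples=600, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
RED_EDGE = 3

rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_with = cross_val_score(rf, X, y, cv=5).mean()
acc_without = cross_val_score(rf, np.delete(X, RED_EDGE, axis=1), y, cv=5).mean()
print(f"with red-edge: {acc_with:.3f}, without: {acc_without:.3f}")
```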
Journal Article
On the use of MapReduce for imbalanced big data using Random Forest
TL;DR: This work analyzes the performance of several techniques for dealing with imbalanced datasets in the big data scenario using the Random Forest classifier, and shows that no single approach to imbalanced big data classification outperforms the others across all the data considered.
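Two standard ways of pairing a Random Forest with imbalance handling, class weighting and random undersampling, can be sketched as follows; this is a single-machine, assumption-laden stand-in, not the paper's MapReduce implementation.

```python
# Hedged sketch of two imbalance remedies around a Random Forest: class
# weighting and random undersampling (synthetic data; illustrative settings).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# Option 1: reweight classes inversely to their frequency.
weighted = RandomForestClassifier(class_weight="balanced", random_state=0)
print("balanced RF F1:", cross_val_score(weighted, X, y, cv=5, scoring="f1").mean())

# Option 2: randomly undersample the majority class before training.
rng = np.random.default_rng(0)
maj, mnr = np.where(y == 0)[0], np.where(y == 1)[0]
keep = np.concatenate([rng.choice(maj, size=len(mnr), replace=False), mnr])
plain = RandomForestClassifier(random_state=0)
print("undersampled RF F1:",
      cross_val_score(plain, X[keep], y[keep], cv=5, scoring="f1").mean())
```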
Journal Article
Random forest regression for online capacity estimation of lithium-ion batteries
Yi Li, Changfu Zou, Maitane Berecibar, Elise Nanini-Maury, Jonathan Cheung-Wai Chan, Peter Van Den Bossche, Joeri Van Mierlo, Noshin Omar
TL;DR: The proposed machine-learning technique, random forest regression, is able to learn the dependency of the battery capacity on the features that are extracted from the charging voltage and capacity measurements, and is promising for online battery capacity estimation.
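A minimal sketch of random forest regression on hand-crafted features, with synthetic placeholders for the charging-curve features and capacities used in the paper:

```python
# Hedged sketch: random forest regression from hand-crafted charging-curve
# features to capacity (features and targets are synthetic placeholders).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 6))  # e.g. slopes/areas of charging curves
capacity = 2.0 - 0.1 * features[:, 0] + 0.05 * rng.normal(size=500)  # in Ah

X_tr, X_te, y_tr, y_te = train_test_split(features, capacity, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```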
Journal Article
Machine learning in cell biology – teaching computers to recognize phenotypes
TL;DR: The article outlines how microscopy images can be converted into a data representation suitable for machine learning, introduces various state-of-the-art machine-learning algorithms, and highlights recent applications in image-based screening.
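The first step, turning an image into a row of features, might be sketched as follows; random arrays stand in for segmented microscopy images, and the feature set is an illustrative assumption.

```python
# Hedged sketch of the representation step: each image becomes a row of
# simple intensity/size features a classifier can consume.
import numpy as np

def image_to_features(img: np.ndarray) -> np.ndarray:
    """Crude per-image features: intensity stats plus a foreground-size proxy."""
    mask = img > img.mean()  # toy foreground segmentation
    return np.array([img.mean(), img.std(), img.max(), mask.mean()])

images = [np.random.default_rng(i).random((64, 64)) for i in range(10)]
X = np.stack([image_to_features(im) for im in images])
print(X.shape)  # (10, 4): one feature row per image, ready for a classifier
```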
References
Journal Article
Bagging predictors
TL;DR: Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
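A minimal sketch of bagging, assuming scikit-learn's BaggingClassifier over bootstrap resamples; the dataset and tree count are illustrative.

```python
# Hedged sketch of bagging: aggregate trees fit on bootstrap resamples of the
# training set, then compare against a single tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

single = DecisionTreeClassifier(random_state=0)
bagged = BaggingClassifier(estimator=DecisionTreeClassifier(),
                           n_estimators=100, random_state=0)

print("single tree:", cross_val_score(single, X, y, cv=5).mean())
print("bagged trees:", cross_val_score(bagged, X, y, cv=5).mean())
```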
Proceedings Article
Experiments with a new boosting algorithm
Yoav Freund, Robert E. Schapire
TL;DR: This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting with Breiman's "bagging" method when both are used to aggregate various classifiers.
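A minimal sketch of the boosting idea, assuming scikit-learn's AdaBoostClassifier over decision stumps as a stand-in for the paper's setup:

```python
# Hedged sketch of boosting: AdaBoost reweights examples so each new weak
# learner focuses on earlier mistakes (synthetic data, illustrative settings).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
stump = DecisionTreeClassifier(max_depth=1)  # a weak learner
ada = AdaBoostClassifier(estimator=stump, n_estimators=200, random_state=0)
print("AdaBoost CV accuracy:", cross_val_score(ada, X, y, cv=5).mean())
```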
Journal Article
The random subspace method for constructing decision forests
TL;DR: A method for constructing a decision-tree-based classifier is proposed that maintains the highest accuracy on training data and improves generalization accuracy as it grows in complexity.
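The random subspace idea, in which each tree sees a random subset of the features rather than of the samples, can be approximated with BaggingClassifier's feature-sampling options; the settings below are illustrative assumptions.

```python
# Hedged sketch of the random subspace method: every tree is trained on all
# samples but only a random half of the features.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                           random_state=0)

subspace = BaggingClassifier(estimator=DecisionTreeClassifier(),
                             n_estimators=100,
                             max_features=0.5,          # half the features per tree
                             bootstrap=False,           # use every sample
                             bootstrap_features=False,  # features drawn w/o replacement
                             random_state=0)
print("random subspace CV accuracy:", cross_val_score(subspace, X, y, cv=5).mean())
```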
Journal Article
An Experimental Comparison of Three Methods for Constructing Ensembles of Decision Trees: Bagging, Boosting, and Randomization
TL;DR: In this article, the authors compared the effectiveness of randomization, bagging, and boosting for improving the performance of the decision-tree algorithm C4.5 and found that in situations with little or no classification noise, randomization is competitive with bagging but not as accurate as boosting.
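A rough modern analogue of that three-way comparison, assuming ExtraTreesClassifier as a deliberate proxy for the paper's randomized C4.5 variant (not the original algorithm):

```python
# Hedged sketch of the three-way comparison; flip_y=0.0 models the
# no-classification-noise setting described in the findings.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              ExtraTreesClassifier)
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, flip_y=0.0, random_state=0)

ensembles = {
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),
    "boosting": AdaBoostClassifier(n_estimators=100, random_state=0),
    "randomization": ExtraTreesClassifier(n_estimators=100, random_state=0),
}
for name, clf in ensembles.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```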
Journal Article
An Empirical Comparison of Voting Classification Algorithms: Bagging, Boosting, and Variants
Eric Bauer, Ron Kohavi
TL;DR: It is found that Bagging improves when probabilistic estimates are used in conjunction with no pruning, as well as when the data is backfit, and that Arc-x4 behaves differently from AdaBoost when reweighting is used instead of resampling, indicating a fundamental difference.
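The variant this summary favors, unpruned trees whose class-probability estimates are averaged instead of hard votes counted, can be sketched directly; the data and number of resamples are illustrative assumptions.

```python
# Hedged sketch of soft-vote bagging: unpruned trees' predicted class
# probabilities are summed over bootstrap resamples, then argmaxed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

probs = np.zeros((len(X_te), 2))
for seed in range(50):
    Xb, yb = resample(X_tr, y_tr, random_state=seed)  # bootstrap sample
    tree = DecisionTreeClassifier(random_state=seed)  # unpruned by default
    probs += tree.fit(Xb, yb).predict_proba(X_te)

pred = probs.argmax(axis=1)  # average the probabilities, then pick the argmax
print("soft-vote bagging accuracy:", (pred == y_te).mean())
```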