Open Access
Classification and Regression by randomForest
Andy Liaw and Matthew C. Wiener
TL;DR: Random forests are proposed, which add an additional layer of randomness to bagging and are robust against overfitting; the randomForest package provides an R interface to the Fortran programs by Breiman and Cutler.

Abstract
Recently there has been a lot of interest in “ensemble learning” — methods that generate many classifiers and aggregate their results. Two well-known methods are boosting (see, e.g., Schapire et al., 1998) and bagging (Breiman, 1996) of classification trees. In boosting, successive trees give extra weight to points incorrectly predicted by earlier predictors. In the end, a weighted vote is taken for prediction. In bagging, successive trees do not depend on earlier trees — each is independently constructed using a bootstrap sample of the data set. In the end, a simple majority vote is taken for prediction. Breiman (2001) proposed random forests, which add an additional layer of randomness to bagging. In addition to constructing each tree using a different bootstrap sample of the data, random forests change how the classification or regression trees are constructed. In standard trees, each node is split using the best split among all variables. In a random forest, each node is split using the best among a subset of predictors randomly chosen at that node. This somewhat counterintuitive strategy turns out to perform very well compared to many other classifiers, including discriminant analysis, support vector machines, and neural networks, and is robust against overfitting (Breiman, 2001). In addition, it is very user-friendly in the sense that it has only two parameters (the number of variables in the random subset at each node and the number of trees in the forest), and is usually not very sensitive to their values. The randomForest package provides an R interface to the Fortran programs by Breiman and Cutler (available at http://www.stat.berkeley.edu/users/breiman/). This article provides a brief introduction to the usage and features of the R functions.
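The procedure the abstract describes can be sketched from scratch. The following is a minimal illustrative sketch in Python, not the package's actual R/Fortran implementation; all function names and the toy data are invented for illustration. Each tree is grown on a bootstrap sample, each node is split using only a random subset of mtry predictors, and prediction is a simple majority vote over the trees.

```python
# Illustrative from-scratch sketch of the random-forest idea (not the
# randomForest package itself): bootstrap sampling per tree, a random
# subset of mtry predictors per node, and a majority vote for prediction.
import random
from collections import Counter

def best_split(X, y, features):
    """Best (feature, threshold) among `features` by misclassification count."""
    best, best_err = None, len(y) + 1
    for f in features:
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            if not left or not right:
                continue  # degenerate split, skip
            err = (len(left) - Counter(left).most_common(1)[0][1] +
                   len(right) - Counter(right).most_common(1)[0][1])
            if err < best_err:
                best_err, best = err, (f, t)
    return best

def grow_tree(X, y, mtry, rng, depth=0, max_depth=5):
    if len(set(y)) == 1 or depth == max_depth:
        return Counter(y).most_common(1)[0][0]     # leaf: majority class
    features = rng.sample(range(len(X[0])), mtry)  # random subset at THIS node
    split = best_split(X, y, features)
    if split is None:
        return Counter(y).most_common(1)[0][0]
    f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return (f, t,
            grow_tree([X[i] for i in li], [y[i] for i in li], mtry, rng, depth + 1, max_depth),
            grow_tree([X[i] for i in ri], [y[i] for i in ri], mtry, rng, depth + 1, max_depth))

def tree_predict(node, row):
    while isinstance(node, tuple):                 # internal nodes are tuples
        f, t, left, right = node
        node = left if row[f] <= t else right
    return node

def random_forest(X, y, ntree=25, mtry=1, seed=0):
    """The two parameters from the abstract: ntree trees, mtry predictors per node."""
    rng = random.Random(seed)
    trees = []
    for _ in range(ntree):
        idx = [rng.randrange(len(X)) for _ in X]   # bootstrap sample of the data
        trees.append(grow_tree([X[i] for i in idx], [y[i] for i in idx], mtry, rng))
    return trees

def forest_predict(trees, row):
    # simple majority vote across the trees
    return Counter(tree_predict(t, row) for t in trees).most_common(1)[0][0]

if __name__ == "__main__":
    # Toy data: the class is determined by the first predictor only.
    X = [[0.1, 0.2], [0.2, 0.8], [0.15, 0.6], [0.3, 0.5],
         [0.7, 0.1], [0.8, 0.9], [0.9, 0.4], [0.85, 0.2]]
    y = [0, 0, 0, 0, 1, 1, 1, 1]
    trees = random_forest(X, y, ntree=25, mtry=1, seed=1)
    print(forest_predict(trees, [0.1, 0.5]), forest_predict(trees, [0.9, 0.5]))
```

Note that with mtry equal to the total number of predictors this reduces to ordinary bagging of trees; the per-node random subset is the extra layer of randomness that distinguishes a random forest.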
Citations
Journal Article
Fine-grained just-in-time defect prediction
TL;DR: This paper investigates to what extent commits are partially defective, proposes a novel fine-grained just-in-time defect prediction model that predicts which specific files in a commit are defective, and evaluates the extent to which it reduces the effort required to diagnose a defect.
Journal Article
Updating soil survey maps using random forest and conditioned Latin hypercube sampling in the loess derived soils of northern Iran
Mohammad Reza Pahlavan Rad, Norair Toomanian, Farhad Khormali, Colby W. Brungard, Chooghi Bayram Komaki, Patrick Bogaert, et al.
TL;DR: In this article, the authors investigated the use of conditioned Latin hypercube sampling and random forest modeling for mapping Soil Taxonomy great group, subgroup and series levels for ~85,000 ha in Golestan Province, Iran.
Journal Article
Grain yield prediction of rice using multi-temporal UAV-based RGB and multispectral images and model transfer – a case study of small farmlands in the South of China
Liang Wan, Haiyan Cen, Jiangpeng Zhu, Jiafei Zhang, Yueming Zhu, Dawei Sun, Xiaoyue Du, Li Zhai, Weng Haiyong, Yijian Li, Li Xiaoran, Yidan Bao, Jianyao Shou, Yong He, et al.
TL;DR: In this paper, the authors explored the potential of fusing spectral and structural information extracted from UAV-based images across the whole growth period of rice to improve grain yield prediction.
Journal Article
A model framework for discovering the spatio-temporal usage patterns of public free-floating bike-sharing system
TL;DR: This paper presents a model framework for exploring the spatio-temporal usage patterns of public free-floating shared bikes from usage data, and provides insights for the promotion and dynamic deployment of bike-sharing systems in urban areas.
Journal Article
Machine learning approaches for forest classification and change analysis using multi-temporal Landsat TM images over Huntington Wildlife Forest
TL;DR: In this article, three machine learning approaches (i.e., decision trees, random forest, and support vector machines) were used to classify local forest communities at the Huntington Wildlife Forest (HWF) located in the central Adirondack Mountains of New York State, and to identify forest type change over a 20-year period using multi-temporal Landsat satellite Thematic Mapper (TM) data.
References
Modern Applied Statistics with S (Venables and Ripley)
Proceedings Article
Boosting the margin: A new explanation for the effectiveness of voting methods
TL;DR: In this paper, the authors show that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero.
Journal Article
Estimating Generalization Error on Two-Class Datasets Using Out-of-Bag Estimates
TL;DR: For two-class datasets, a method is provided for estimating the generalization error of a bagged classifier using out-of-bag estimates; most of the bias is eliminated and accuracy is increased by incorporating a correction based on the distribution of the out-of-bag votes.
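The out-of-bag idea in the reference above can be stated concretely: since each tree is trained only on its bootstrap sample, every training point is left out of some trees' samples, and aggregating votes from only those trees gives a generalization-error estimate without a separate test set. A minimal sketch, with the function name and input shapes invented for illustration:

```python
# Sketch of a plain out-of-bag (OOB) error estimate (without the vote-distribution
# correction described in the reference): each point is scored only by trees
# whose bootstrap sample did not contain it.
from collections import Counter

def oob_error(y, inbag, tree_preds):
    """y: true labels; inbag[t]: set of training indices tree t saw;
    tree_preds[t][i]: tree t's prediction for point i."""
    errors, counted = 0, 0
    for i, yi in enumerate(y):
        # votes only from trees that did NOT see point i during training
        votes = Counter(tree_preds[t][i]
                        for t in range(len(inbag)) if i not in inbag[t])
        if not votes:
            continue  # point appeared in every bootstrap sample; no OOB vote
        counted += 1
        if votes.most_common(1)[0][0] != yi:
            errors += 1
    return errors / counted
```

For example, with two trees where point 1 is in both bootstrap samples, point 0's OOB prediction comes from tree 1 alone and point 2's from tree 0 alone, so only those two points contribute to the estimate.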