
Classification and Regression by randomForest

TLDR
Random forests add an additional layer of randomness to bagging and are robust against overfitting; the randomForest package provides an R interface to the Fortran programs by Breiman and Cutler.
Abstract
Recently there has been a lot of interest in “ensemble learning” — methods that generate many classifiers and aggregate their results. Two well-known methods are boosting (see, e.g., Schapire et al., 1998) and bagging (Breiman, 1996) of classification trees. In boosting, successive trees give extra weight to points incorrectly predicted by earlier predictors. In the end, a weighted vote is taken for prediction. In bagging, successive trees do not depend on earlier trees — each is independently constructed using a bootstrap sample of the data set. In the end, a simple majority vote is taken for prediction. Breiman (2001) proposed random forests, which add an additional layer of randomness to bagging. In addition to constructing each tree using a different bootstrap sample of the data, random forests change how the classification or regression trees are constructed. In standard trees, each node is split using the best split among all variables. In a random forest, each node is split using the best among a subset of predictors randomly chosen at that node. This somewhat counterintuitive strategy turns out to perform very well compared to many other classifiers, including discriminant analysis, support vector machines and neural networks, and is robust against overfitting (Breiman, 2001). In addition, it is very user-friendly in the sense that it has only two parameters (the number of variables in the random subset at each node and the number of trees in the forest), and is usually not very sensitive to their values. The randomForest package provides an R interface to the Fortran programs by Breiman and Cutler (available at http://www.stat.berkeley.edu/users/breiman/). This article provides a brief introduction to the usage and features of the R functions.
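To make the two parameters concrete, here is a minimal R sketch fitting a classification forest to the built-in iris data. The call uses the randomForest() interface the abstract describes; the ntree and mtry values shown match the package defaults for this data and are illustrative, not tuning advice.

    # Minimal sketch: classification with randomForest in R.
    # Assumes the package is installed: install.packages("randomForest")
    library(randomForest)
    set.seed(71)                        # make the bootstrap samples reproducible
    data(iris)
    iris.rf <- randomForest(Species ~ ., data = iris,
                            ntree = 500,       # number of trees in the forest
                            mtry  = 2,         # predictors tried at each split
                            importance = TRUE) # also compute variable importance
    print(iris.rf)        # OOB error estimate and confusion matrix
    importance(iris.rf)   # importance of each predictor

Because every tree is grown on a bootstrap sample, the observations left out of each sample (the "out-of-bag" cases) act as a built-in test set, so print() can report an internal error estimate without cross-validation.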



Citations
Journal Article

METAGENassist: a comprehensive web server for comparative metagenomics

TL;DR: METAGENassist is a freely accessible, easy-to-use web server for comparative metagenomic analysis that allows users to perform a variety of multivariate and univariate data analyses, including fold-change analysis, t-tests, PCA, PLS-DA, clustering, and classification.
Posted Content

Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation

TL;DR: Individual conditional expectation (ICE) plots are presented as a tool for visualizing the model estimated by any supervised learning algorithm; they highlight the variation in fitted values across the range of a covariate, suggesting where and to what extent heterogeneities might exist.
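The ICE idea is simple enough to sketch directly: take each observation, sweep a single covariate over a grid while holding the rest fixed, and trace the model's predictions. The base-R sketch below applies it to a randomForest regression fit; it illustrates the concept only and is not the paper's own implementation.

    # Hedged sketch of ICE curves for a random forest regression
    # (concept only; not the authors' implementation).
    library(randomForest)
    set.seed(1)
    df  <- na.omit(airquality)                 # built-in data, complete cases
    fit <- randomForest(Ozone ~ ., data = df)
    grid <- seq(min(df$Temp), max(df$Temp), length.out = 25)
    # One prediction per observation at each grid value of Temp
    ice <- sapply(grid, function(t) { tmp <- df; tmp$Temp <- t; predict(fit, tmp) })
    matplot(grid, t(ice), type = "l", lty = 1, col = "grey",
            xlab = "Temp", ylab = "Predicted Ozone")   # one curve per observation
    lines(grid, colMeans(ice), lwd = 2)                # average of the ICE curves

Averaging the ICE curves recovers the classical partial-dependence curve; the spread of the individual curves around that average is what reveals the heterogeneity the paper is about.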
Journal Article

Digital mapping of soil organic matter stocks using Random Forest modeling in a semi-arid steppe ecosystem

TL;DR: In this article, the authors evaluated a Digital Soil Mapping (DSM) approach to model the spatial distribution of stocks of soil organic carbon (SOC), total carbon (Ctot), total nitrogen (Ntot) and total sulphur (Stot) for a data-sparse, semi-arid catchment in Inner Mongolia, Northern China.
Proceedings Article

COUNT Forest: CO-Voting Uncertain Number of Targets Using Random Forest for Crowd Density Estimation

TL;DR: This paper presents a patch-based approach for crowd density estimation in public scenes that achieves state-of-the-art results on the public Mall and UCSD datasets, and proposes two potential applications, traffic counting and scene understanding, with promising results.
References

Modern Applied Statistics with S

TL;DR: The standard reference by Venables and Ripley on applied statistics with the S language, covering the tree-based methods on which random forests build.
Proceedings Article

Boosting the margin: A new explanation for the effectiveness of voting methods

TL;DR: In this paper, the authors show that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero.
Journal Article

Estimating Generalization Error on Two-Class Datasets Using Out-of-Bag Estimates

TL;DR: For two-class datasets, a method is provided for estimating the generalization error of a bagged classifier using out-of-bag estimates; most of the bias is eliminated and accuracy is increased by incorporating a correction based on the distribution of the out-of-bag votes.
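As a small illustration of the out-of-bag idea (the plain estimate only, not the bias correction this paper proposes), randomForest stores an OOB prediction for every training case, from which the error rate falls out directly. The two-class subset of iris below is just a stand-in example.

    # Hedged sketch: plain OOB error on a two-class problem
    # (illustrates the OOB idea, not this paper's bias correction).
    library(randomForest)
    set.seed(131)
    two <- droplevels(subset(iris, Species != "setosa"))  # any binary data works
    fit <- randomForest(Species ~ ., data = two, ntree = 500)
    # fit$predicted holds the OOB prediction for each training case
    oob.err <- mean(fit$predicted != two$Species)
    oob.err   # agrees with the OOB error rate reported by print(fit)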