Open Access
Classification and Regression by randomForest
Andy Liaw, Matthew C. Wiener
TLDR
Random forests are proposed, which add an additional layer of randomness to bagging and are robust against overfitting; the randomForest package provides an R interface to the Fortran programs by Breiman and Cutler.
Abstract
Recently there has been a lot of interest in “ensemble learning” — methods that generate many classifiers and aggregate their results. Two well-known methods are boosting (see, e.g., Schapire et al., 1998) and bagging (Breiman, 1996) of classification trees. In boosting, successive trees give extra weight to points incorrectly predicted by earlier predictors. In the end, a weighted vote is taken for prediction. In bagging, successive trees do not depend on earlier trees — each is independently constructed using a bootstrap sample of the data set. In the end, a simple majority vote is taken for prediction. Breiman (2001) proposed random forests, which add an additional layer of randomness to bagging. In addition to constructing each tree using a different bootstrap sample of the data, random forests change how the classification or regression trees are constructed. In standard trees, each node is split using the best split among all variables. In a random forest, each node is split using the best among a subset of predictors randomly chosen at that node. This somewhat counterintuitive strategy turns out to perform very well compared to many other classifiers, including discriminant analysis, support vector machines and neural networks, and is robust against overfitting (Breiman, 2001). In addition, it is very user-friendly in the sense that it has only two parameters (the number of variables in the random subset at each node and the number of trees in the forest), and is usually not very sensitive to their values. The randomForest package provides an R interface to the Fortran programs by Breiman and Cutler (available at http://www.stat.berkeley.edu/users/breiman/). This article provides a brief introduction to the usage and features of the R functions.
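The two ingredients the abstract describes — a bootstrap sample per tree, plus a random subset of predictors considered at each node — can be sketched in plain Python. This is an illustrative toy only, not the randomForest package's API (which wraps Breiman and Cutler's Fortran code); every function name below is invented for the sketch, and the two tuning parameters appear as `ntree` and `mtry`.

```python
import random
from collections import Counter

def best_split(X, y, features):
    """Pick the (feature, threshold) pair minimizing weighted Gini impurity."""
    def gini(labels):
        n = len(labels)
        return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())
    best, best_score = None, float("inf")
    for f in features:
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best_score:
                best_score, best = score, (f, t)
    return best

def grow_tree(X, y, mtry, depth=0, max_depth=5):
    if len(set(y)) == 1 or depth == max_depth:
        return Counter(y).most_common(1)[0][0]        # leaf: majority label
    features = random.sample(range(len(X[0])), mtry)  # random predictor subset at this node
    split = best_split(X, y, features)
    if split is None:
        return Counter(y).most_common(1)[0][0]
    f, t = split
    li = [i for i, row in enumerate(X) if row[f] <= t]
    ri = [i for i, row in enumerate(X) if row[f] > t]
    return (f, t,
            grow_tree([X[i] for i in li], [y[i] for i in li], mtry, depth + 1, max_depth),
            grow_tree([X[i] for i in ri], [y[i] for i in ri], mtry, depth + 1, max_depth))

def predict_tree(node, row):
    while isinstance(node, tuple):
        f, t, left, right = node
        node = left if row[f] <= t else right
    return node

def fit_forest(X, y, ntree=25, mtry=1):
    n, forest = len(X), []
    for _ in range(ntree):
        idx = [random.randrange(n) for _ in range(n)]  # bootstrap sample with replacement
        forest.append(grow_tree([X[i] for i in idx], [y[i] for i in idx], mtry))
    return forest

def predict_forest(forest, row):
    votes = Counter(predict_tree(t, row) for t in forest)  # simple majority vote
    return votes.most_common(1)[0][0]

random.seed(0)
# toy two-class data: the class depends only on the first coordinate
X = [[i, random.random()] for i in range(20)]
y = [0 if row[0] < 10 else 1 for row in X]
forest = fit_forest(X, y, ntree=25, mtry=1)
acc = sum(predict_forest(forest, row) == lab for row, lab in zip(X, y)) / len(X)
print(f"training accuracy: {acc:.2f}")
```

With `mtry` equal to the number of predictors this reduces to plain bagging; the random-subset restriction is the extra layer of randomness the abstract refers to.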
Citations
Journal Article
Giant virus diversity and host interactions through global metagenomics
Frederik Schulz, Simon Roux, David Paez-Espino, Sean P. Jungbluth, David A. Walsh, Vincent J. Denef, Katherine D. McMahon, Konstantinos T. Konstantinidis, Emiley A. Eloe-Fadrosh, Nikos C. Kyrpides, Tanja Woyke
TL;DR: It is anticipated that the global diversity of NCLDVs described here will establish giant viruses, which are associated with most major eukaryotic lineages, as important players in ecosystems across Earth's biomes.
Journal Article
Satellite-based soybean yield forecast: Integrating machine learning and weather data for improving crop yield prediction in southern Brazil
Rai Schwalbert, Telmo Jorge Carneiro Amado, Geomar Mateus Corassa, Luan Pierre Pott, P. V. Vara Prasad, Ignacio A. Ciampitti
TL;DR: In this paper, the authors evaluated the performance of three algorithms (multivariate OLS linear regression, random forest, and LSTM neural networks) for forecasting soybean yield using NDVI, EVI, land surface temperature, and precipitation as independent variables, and assessed how early in the soybean growing season the method can forecast yield with reasonable accuracy.
Journal Article
Random forest regression and spectral band selection for estimating sugarcane leaf nitrogen concentration using EO-1 Hyperion hyperspectral data
TL;DR: In this paper, the authors explored the potential of a random forest regression algorithm to select the spectral features in hyperspectral data necessary for predicting sugarcane leaf N concentration, showing that it can serve as both a feature selection and a regression method for analysing spectral data.
Journal Article
Obesity dependent metabolic signatures associated with nonalcoholic fatty liver disease progression
Jonathan Barr, Juan Caballería, Ibon Martínez-Arranz, Agustín Domínguez-Díez, Cristina Alonso, Jordi Muntané, Miriam Pérez-Cormenzana, Carmelo García-Monzón, Rebeca Mayo, Antonio Martín-Duce, Manuel Romero-Gómez, Lo Iacono O, Joan Tordjman, Raúl J. Andrade, Pérez-Carreras M, Le Marchand-Brustel Y, Albert Tran, Fernández-Escalante C, Arévalo E, Mayte García-Unzueta, Karine Clément, Javier Crespo, Philippe Gual, Manuel Gómez-Fleitas, María L. Martínez-Chantar, A. Castro, Shelly C. Lu, Mercedes Vazquez-Chantada, José M. Mato
TL;DR: The present data, indicating that a BMI-dependent serum metabolic profile may be able to reliably distinguish NASH from steatosis patients, have significant implications for the development of NASH biomarkers and potential novel targets for therapeutic intervention.
Journal Article
Machine Learning in Radiology: Applications Beyond Image Interpretation
Paras Lakhani, Adam Prater, R. Kent Hutson, Kathy P. Andriole, Keith J. Dreyer, José M. Morey, Luciano M. Prevedello, Toshi J. Clark, J. Raymond Geis, Jason N. Itri, C. Matthew Hawkins
TL;DR: Provides an overview of machine learning, its applications in radiology and other domains, and many use cases that do not involve image interpretation, to help radiology practices prepare for the future and realize performance improvements and efficiency gains.
References
Modern Applied Statistics with S
Proceedings Article
Boosting the margin: A new explanation for the effectiveness of voting methods
TL;DR: In this paper, the authors show that the test error of the generated classifier usually does not increase as its size becomes very large, and is often observed to decrease even after the training error reaches zero.
Journal Article
Estimating Generalization Error on Two-Class Datasets Using Out-of-Bag Estimates
TL;DR: For two-class datasets, a method is provided for estimating the generalization error of a bagged classifier using out-of-bag estimates; most of the bias is eliminated, and accuracy is increased, by incorporating a correction based on the distribution of the out-of-bag votes.
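The out-of-bag idea referenced above can be illustrated with a toy sketch: each base learner is trained on a bootstrap sample, and every point that was left out of that sample gets an "out-of-bag" vote from it; the aggregated OOB votes yield an error estimate without a held-out test set. This is a minimal illustration using single-threshold decision stumps rather than full trees, and all names are invented for the sketch (it is not the method's actual implementation).

```python
import random
from collections import Counter

def fit_stump(X, y):
    # best single-feature threshold by training accuracy (illustrative base learner)
    best, best_acc = None, -1.0
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                preds = [int((row[f] > t) == (sign == 1)) for row in X]
                acc = sum(p == lab for p, lab in zip(preds, y)) / len(y)
                if acc > best_acc:
                    best_acc, best = acc, (f, t, sign)
    return best

def predict_stump(stump, row):
    f, t, sign = stump
    return int((row[f] > t) == (sign == 1))

random.seed(1)
# separable toy data on one feature
X = [[i] for i in range(20)]
y = [0] * 10 + [1] * 10

n, ntree = len(X), 30
oob_votes = [Counter() for _ in range(n)]
for _ in range(ntree):
    idx = [random.randrange(n) for _ in range(n)]  # bootstrap sample
    in_bag = set(idx)
    stump = fit_stump([X[i] for i in idx], [y[i] for i in idx])
    for i in range(n):
        if i not in in_bag:                        # point i is out-of-bag for this learner
            oob_votes[i][predict_stump(stump, X[i])] += 1

# OOB error: fraction of points whose OOB majority vote disagrees with the true label
scored = [i for i in range(n) if oob_votes[i]]
oob_error = sum(oob_votes[i].most_common(1)[0][0] != y[i] for i in scored) / len(scored)
print(f"OOB error estimate: {oob_error:.2f}")
```

Each point is out-of-bag for roughly 37% of the learners, so with enough of them every point receives OOB votes and the estimate uses all the training data.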