Journal ArticleDOI

Combining bagging, boosting, rotation forest and random subspace methods

Sotiris Kotsiantis
01 Mar 2011
Vol. 35, Iss. 3, pp. 223-240
TLDR
An ensemble of bagging, boosting, rotation forest and random subspace ensembles, each containing 6 sub-classifiers, is combined by voting for the final prediction; the proposed technique had better accuracy than the compared methods in most cases.
Abstract
Bagging, boosting, rotation forest and random subspace methods are well-known re-sampling ensemble methods that generate and combine a diversity of learners using the same learning algorithm for the base-classifiers. Boosting and rotation forest algorithms are considered stronger than bagging and random subspace methods on noise-free data. However, there are strong empirical indications that bagging and random subspace methods are much more robust than boosting and rotation forest in noisy settings. For this reason, in this work we built an ensemble of bagging, boosting, rotation forest and random subspace ensembles, each with 6 sub-classifiers, and used a voting methodology for the final prediction. We compared the proposed technique with simple bagging, boosting, rotation forest and random subspace ensembles of 25 sub-classifiers, as well as with other well-known combining methods, on standard benchmark datasets; the proposed technique had better accuracy in most cases.
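The voting scheme described in the abstract can be sketched with off-the-shelf components. The snippet below is a minimal illustration, not the authors' implementation: it assumes scikit-learn (1.2 or later, for the `estimator` keyword) and a decision-tree base learner, it omits rotation forest (which has no scikit-learn implementation), and it emulates the random subspace method with BaggingClassifier by disabling bootstrapping and sub-sampling features.

```python
# Hedged sketch of combining member ensembles (6 sub-classifiers each) by voting.
# Assumptions: scikit-learn >= 1.2, decision-tree base learner, rotation forest omitted.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

base = DecisionTreeClassifier(random_state=0)

# Bagging: each of the 6 trees is trained on a bootstrap sample of the training set.
bagging = BaggingClassifier(estimator=base, n_estimators=6, random_state=0)

# Boosting: AdaBoost with 6 sequentially re-weighted trees.
boosting = AdaBoostClassifier(estimator=base, n_estimators=6, random_state=0)

# Random subspace: each of the 6 trees sees a random half of the features, no bootstrapping.
subspace = BaggingClassifier(estimator=base, n_estimators=6,
                             bootstrap=False, max_features=0.5, random_state=0)

# Majority vote across the member ensembles gives the final prediction.
combined = VotingClassifier(
    estimators=[("bagging", bagging), ("boosting", boosting), ("subspace", subspace)],
    voting="hard",
)

X, y = load_iris(return_X_y=True)
print("Mean 5-fold CV accuracy:", cross_val_score(combined, X, y, cv=5).mean())
```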


Citations
Journal ArticleDOI

When Gaussian Process Meets Big Data: A Review of Scalable GPs

TL;DR: In this article, a review of state-of-the-art scalable Gaussian process regression (GPR) models is presented, focusing on global and local approximations for subspace learning.
Journal ArticleDOI

Mixture of experts: a literature survey

TL;DR: A categorisation of the ME literature based on implicit problem-space partitioning using a tacit competitive process between the experts is presented; the first group is called the mixture of implicitly localised experts (MILE), and the second the mixture of explicitly localised experts (MELE), as it uses pre-specified clusters.
Posted Content

When Gaussian Process Meets Big Data: A Review of Scalable GPs

TL;DR: This article is devoted to reviewing state-of-the-art scalable GPs in two main categories: global approximations that distil the entire data, and local approximations that divide the data for subspace learning.
Journal ArticleDOI

Macular OCT Classification Using a Multi-Scale Convolutional Neural Network Ensemble

TL;DR: A novel CAD system based on a multi-scale convolutional mixture of experts (MCME) ensemble model is presented to identify normal retina and two common types of macular pathology, namely dry age-related macular degeneration and diabetic macular edema.
Journal ArticleDOI

Semantic content-based image retrieval: A comprehensive study

TL;DR: This study presents a detailed overview of the CBIR framework and the improvements achieved, including image preprocessing, feature extraction and indexing, system learning, benchmarking datasets, similarity matching, relevance feedback, performance evaluation, and visualization.
References
Journal ArticleDOI

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest and are also applicable to regression.
Journal ArticleDOI

Bagging predictors

Leo Breiman
TL;DR: Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
Journal ArticleDOI

A Decision-Theoretic Generalization of On-Line Learning and an Application to Boosting

TL;DR: The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
Proceedings Article

Experiments with a new boosting algorithm

TL;DR: This paper describes experiments carried out to assess how well AdaBoost, with and without pseudo-loss, performs on real learning problems, and compares boosting to Breiman's "bagging" method when used to aggregate various classifiers.