
Showing papers by "William G. Macready published in 1996"


Posted Content
TL;DR: An analytically simple bandit model is provided that is more directly applicable to optimization theory than the traditional bandit problem, and a near-optimal strategy is determined for that model.
Abstract: We explore the 2-armed bandit with Gaussian payoffs as a theoretical model for optimization. We formulate the problem from a Bayesian perspective, and provide the optimal strategy for both 1 and 2 pulls. We present regions of parameter space where a greedy strategy is provably optimal. We also compare the greedy and optimal strategies to a genetic-algorithm-based strategy. In doing so we correct a previous error in the literature concerning the Gaussian bandit problem and the supposed optimality of genetic algorithms for this problem. Finally, we provide an analytically simple bandit model that is more directly applicable to optimization theory than the traditional bandit problem, and determine a near-optimal strategy for that model.

17 citations
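To make the setup above concrete, here is a minimal sketch of a 2-armed bandit with Gaussian payoffs played by a greedy Bayesian strategy. It is illustrative only, not the paper's exact formulation: the true arm means, the known payoff variance, and the broad Gaussian priors are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-armed Gaussian bandit: arm i pays N(true_means[i], sigma^2).
# The true means are hidden from the strategy; these values are assumptions.
true_means = np.array([0.0, 0.5])
sigma = 1.0          # payoff standard deviation, assumed known
n_pulls = 50

# Bayesian model: Gaussian prior N(m_i, s_i^2) on each arm's unknown mean,
# updated in closed form (conjugate update) after every observed payoff.
post_mean = np.zeros(2)
post_var = np.full(2, 10.0)   # broad prior variance (assumed)

total_payoff = 0.0
for t in range(n_pulls):
    # Greedy strategy: always pull the arm with the highest posterior mean.
    arm = int(np.argmax(post_mean))
    payoff = rng.normal(true_means[arm], sigma)
    total_payoff += payoff

    # Conjugate Gaussian update for the pulled arm (known payoff variance).
    precision = 1.0 / post_var[arm] + 1.0 / sigma**2
    post_mean[arm] = (post_mean[arm] / post_var[arm] + payoff / sigma**2) / precision
    post_var[arm] = 1.0 / precision

print("posterior means:", post_mean)
print("total payoff over", n_pulls, "pulls:", total_payoff)
```

The paper's point is that such a greedy rule is provably optimal only in certain regions of parameter space; the sketch simply shows the Bayesian bookkeeping that any such strategy, greedy or optimal, relies on.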


Posted Content
TL;DR: In this paper, the bias-variance decomposition is used to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm.
Abstract: In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to try to improve a learning algorithm's performance. The computational requirements for estimating the resultant generalization error on a test set by means of cross-validation are often prohibitive; for leave-one-out cross-validation one needs to train the underlying algorithm on the order of $m\nu$ times, where $m$ is the size of the training set and $\nu$ is the number of replicates. This paper presents several techniques for exploiting the bias-variance decomposition [GBD92, Wol96] to estimate the generalization error of a bagged learning algorithm without invoking yet more training of the underlying learning algorithm. The best of our estimators exploits stacking [Wol92]. In a set of experiments reported here, it was found to be more accurate than both the alternative cross-validation-based estimator of the bagged algorithm's error and the cross-validation-based estimator of the underlying algorithm's error. This improvement was particularly pronounced for small test sets. This suggests a novel justification for using bagging: improved estimation of generalization error. Key words: machine learning, regression, bootstrap, bagging.

4 citations
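The sketch below illustrates the bagging setup the abstract refers to: bootstrap replicates of a training set, a bagged prediction formed by averaging, and an error estimate obtained without any training beyond the bagging runs themselves. For the estimate it uses simple out-of-bag averaging, which is a stand-in for illustration, not the bias-variance or stacking-based estimators developed in the paper; the toy data and the polynomial base learner are likewise assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data (illustrative assumption, not from the paper).
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0.0, 0.3, size=200)

def fit_poly(X, y, degree=5):
    """Underlying learner: least-squares polynomial fit (illustrative choice)."""
    coeffs = np.polyfit(X[:, 0], y, degree)
    return lambda Xq: np.polyval(coeffs, Xq[:, 0])

n_replicates = 25          # number of bootstrap replicates, nu in the abstract
m = len(X)                 # training-set size, m in the abstract
predictions = np.full((n_replicates, m), np.nan)

for b in range(n_replicates):
    # Bootstrap replicate: sample m points with replacement and train on them.
    idx = rng.integers(0, m, size=m)
    model = fit_poly(X[idx], y[idx])
    # Record predictions only on points left out of this replicate, so the
    # error estimate reuses the bagging runs rather than training anything new.
    oob = np.setdiff1d(np.arange(m), idx)
    predictions[b, oob] = model(X[oob])

# Bagged prediction for each point = average over replicates that left it out.
bagged_pred = np.nanmean(predictions, axis=0)
valid = ~np.isnan(bagged_pred)
oob_mse = np.mean((y[valid] - bagged_pred[valid]) ** 2)
print("out-of-bag estimate of generalization MSE:", oob_mse)
```

Like the estimators in the paper, this avoids the roughly $m\nu$ extra training runs that leave-one-out cross-validation of the bagged learner would require, though the paper's bias-variance and stacking constructions are different from the averaging shortcut shown here.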