Open Access · Posted Content

The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond

TLDR
In this paper, the KL-UCB algorithm is shown to satisfy a uniformly better regret bound than UCB or UCB2 for bounded rewards, and to reach the lower bound of Lai and Robbins for Bernoulli rewards.
Abstract
This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB or UCB2; second, in the special case of Bernoulli rewards, it reaches the lower bound of Lai and Robbins. Furthermore, we show that simple adaptations of the KL-UCB algorithm are also optimal for specific classes of (possibly unbounded) rewards, including those generated from exponential families of distributions. A large-scale numerical study comparing KL-UCB with its main competitors (UCB, UCB2, UCB-Tuned, UCB-V, DMED) shows that KL-UCB is remarkably efficient and stable, including for short time horizons. KL-UCB is also the only method that always performs better than the basic UCB policy. Our regret bounds rely on deviation results of independent interest, which are stated and proved in the Appendix. As a by-product, we also obtain an improved regret bound for the standard UCB algorithm.
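The index policy described in the abstract can be sketched for the Bernoulli case: at each round, KL-UCB pulls the arm whose upper confidence bound is largest, where the bound is the largest mean q compatible (in Kullback-Leibler divergence) with the arm's empirical mean given its pull count. The following is a minimal, illustrative sketch (not the authors' reference implementation); function names and the choice of 50 bisection steps are assumptions, and the exploration term follows the standard log(t) + c·log(log(t)) form with c = 0:

```python
import math

def bernoulli_kl(p, q):
    """KL divergence kl(p, q) between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12  # clip away from 0 and 1 to avoid log(0)
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, pulls, t, c=0.0):
    """Largest q in [mean, 1] with pulls * kl(mean, q) <= log(t) + c*log(log(t))."""
    if pulls == 0:
        return 1.0  # unexplored arms get the maximal optimistic index
    bound = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / pulls
    lo, hi = mean, 1.0
    for _ in range(50):  # bisection: kl(mean, q) is increasing in q for q >= mean
        mid = (lo + hi) / 2.0
        if bernoulli_kl(mean, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo
```

At each round t, the policy pulls `argmax` over arms of `kl_ucb_index(mean_a, pulls_a, t)`. Because the KL-based bound adapts to the empirical mean, it is tighter than the Hoeffding-based UCB bound near the boundaries of [0, 1], which is the intuition behind the improved regret guarantee.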


Citations
Journal ArticleDOI

Kullback-Leibler upper confidence bounds for optimal sequential allocation

TL;DR: The main contribution is a unified finite-time analysis of the regret of these algorithms that asymptotically matches the lower bounds of Lai and Robbins (1985) and Burnetas and Katehakis (1996), respectively.
Posted Content

Cascading Bandits: Learning to Rank in the Cascade Model

TL;DR: This paper studies cascading bandits, a learning variant of the cascade model in which the objective is to identify the $K$ most attractive items, and formulates the problem as a stochastic combinatorial partial monitoring problem.
Journal ArticleDOI

Optimistic Bayesian sampling in contextual-bandit problems

TL;DR: An approach due to Thompson (1933), which uses samples from the posterior distribution of the instantaneous value of each action, is considered; a new algorithm, Optimistic Bayesian Sampling (OBS), performs competitively against recently proposed benchmark algorithms and outperforms Thompson's method throughout.
Journal Article

Combinatorial multi-armed bandit and its extension to probabilistically triggered arms

TL;DR: In this article, the authors define a general framework for combinatorial multi-armed bandit (CMAB) problems, where subsets of base arms with unknown distributions form super arms and the reward of the super arm depends on the outcomes of all played arms.
Proceedings ArticleDOI

Stochastic bandits robust to adversarial corruptions

TL;DR: In this article, the authors introduce a new model of stochastic bandits with adversarial corruptions, which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews, and email spam.
References
Journal ArticleDOI

Finite-time Analysis of the Multiarmed Bandit Problem

TL;DR: This work shows that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
Journal ArticleDOI

Sample mean based index policies by O(log n) regret for the multi-armed bandit problem

TL;DR: This paper constructs index policies that depend on the rewards from each arm only through their sample mean, and achieves a O(log n) regret with a constant that is based on the Kullback–Leibler number.
Journal ArticleDOI

Exploration-exploitation tradeoff using variance estimates in multi-armed bandits

TL;DR: A variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms is considered, providing the first analysis of the expected regret for such algorithms.
Book ChapterDOI

PAC Bounds for Multi-armed Bandit and Markov Decision Processes

TL;DR: The bandit problem is revisited and considered under the PAC model, and it is shown that given n arms, it suffices to pull the arms O((n/ε²) log(1/δ)) times to find an ε-optimal arm with probability at least 1 − δ.