Open Access Proceedings Article

The KL-UCB Algorithm for Bounded Stochastic Bandits and Beyond

TLDR
In this article, the KL-UCB algorithm is shown to satisfy a uniformly better regret bound than UCB and its variants and, for Bernoulli rewards, to reach the lower bound of Lai and Robbins.
Abstract
This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB and its variants; second, in the special case of Bernoulli rewards, it reaches the lower bound of Lai and Robbins. Furthermore, we show that simple adaptations of the KL-UCB algorithm are also optimal for specific classes of (possibly unbounded) rewards, including those generated from exponential families of distributions. A large-scale numerical study comparing KL-UCB with its main competitors (UCB, MOSS, UCB-Tuned, UCB-V, DMED) shows that KL-UCB is remarkably efficient and stable, including for short time horizons. KL-UCB is also the only method that always performs better than the basic UCB policy. Our regret bounds rely on deviations results of independent interest which are stated and proved in the Appendix. As a by-product, we also obtain an improved regret bound for the standard UCB algorithm.
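To make the abstract's "index policy" concrete, here is a minimal sketch of the KL-UCB index for Bernoulli rewards, computed by bisection. The function names, tolerance, and the exploration threshold (log t, with an optional c·log log t correction) follow common presentations of the algorithm and are assumptions for illustration, not the paper's exact pseudocode.

```python
import math

def bernoulli_kl(p, q, eps=1e-12):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n_pulls, t, c=0.0, tol=1e-6):
    """Largest q >= mean such that n_pulls * kl(mean, q) stays below
    the exploration threshold log(t) + c * log(log(t))."""
    if n_pulls == 0:
        return 1.0  # unexplored arms get the maximal index
    bound = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / n_pulls
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bernoulli_kl(mean, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo
```

At each round the policy plays the arm with the largest index; because the index shrinks as an arm accumulates pulls, exploration is automatically tapered.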



Citations
Book

Regret Analysis of Stochastic and Nonstochastic Multi-Armed Bandit Problems

TL;DR: In this article, the authors focus on regret analysis in the context of multi-armed bandit problems, where regret is defined as the balance between staying with the option that gave highest payoff in the past and exploring new options that might give higher payoffs in the future.
Proceedings Article

Analysis of Thompson Sampling for the Multi-armed Bandit Problem

TL;DR: In this paper, the Thompson sampling algorithm is shown to achieve logarithmic expected regret for the stochastic multi-armed bandit problem; for the two-armed case, the expected regret is O(ln T/Δ + 1/Δ³).
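The Beta-Bernoulli form of Thompson sampling analyzed in this line of work is easy to state: sample a mean for each arm from its posterior and play the argmax. The sketch below is a minimal illustration under that assumption; the uniform Beta(1, 1) prior and the helper name are illustrative choices, not the paper's notation.

```python
import random

def thompson_sampling(arms, horizon, seed=0):
    """Beta-Bernoulli Thompson sampling; `arms` holds the true
    (unknown to the policy) success probabilities."""
    rng = random.Random(seed)
    k = len(arms)
    successes = [0] * k
    failures = [0] * k
    total = 0.0
    for _ in range(horizon):
        # Draw one posterior sample per arm, play the arm with the largest draw.
        samples = [rng.betavariate(successes[i] + 1, failures[i] + 1)
                   for i in range(k)]
        i = max(range(k), key=lambda j: samples[j])
        reward = 1 if rng.random() < arms[i] else 0
        successes[i] += reward
        failures[i] += 1 - reward
        total += reward
    return total
```

Unlike UCB-style policies, no explicit confidence bound is computed; posterior randomness alone drives exploration.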
Book ChapterDOI

Thompson sampling: an asymptotically optimal finite-time analysis

TL;DR: The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem is answered positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound for the cumulative regret.
Proceedings Article

Further Optimal Regret Bounds for Thompson Sampling

TL;DR: A novel regret analysis for Thompson Sampling is provided that proves the first near-optimal problem-independent bound of O(√(NT ln T)) on the expected regret of this algorithm, and simultaneously provides the optimal problem-dependent bound.
Book ChapterDOI

On upper-confidence bound policies for switching bandit problems

TL;DR: An upper bound on the expected regret is established by bounding the expected number of times suboptimal arms are played, and it is shown that both the discounted UCB and the sliding-window UCB match the lower bound up to a logarithmic factor.
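The sliding-window variant mentioned above restricts each arm's statistics to the most recent rounds so the index can track non-stationary rewards. The sketch below illustrates that idea only; the class name, the exploration constant `xi`, and the initial round-robin phase are hypothetical choices, not the cited paper's exact tuning.

```python
import math
from collections import deque

class SlidingWindowUCB:
    """UCB with statistics computed over the last `window` rounds only,
    so older observations are forgotten as the reward process drifts."""

    def __init__(self, n_arms, window, xi=0.6):
        self.n_arms = n_arms
        self.window = window
        self.xi = xi  # exploration constant (illustrative value)
        self.history = deque()  # (arm, reward) pairs inside the window
        self.t = 0

    def select(self):
        self.t += 1
        counts = [0] * self.n_arms
        sums = [0.0] * self.n_arms
        for arm, reward in self.history:
            counts[arm] += 1
            sums[arm] += reward
        # Play each arm once before trusting the indices.
        for arm in range(self.n_arms):
            if counts[arm] == 0:
                return arm
        span = min(self.t, self.window)
        indices = [
            sums[a] / counts[a] + math.sqrt(self.xi * math.log(span) / counts[a])
            for a in range(self.n_arms)
        ]
        return max(range(self.n_arms), key=lambda a: indices[a])

    def update(self, arm, reward):
        self.history.append((arm, reward))
        if len(self.history) > self.window:
            self.history.popleft()  # forget the oldest observation
```

The discounted variant replaces the hard window with geometrically decaying weights; both achieve the tracking behavior the bound describes.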
References
Journal ArticleDOI

Finite-time Analysis of the Multiarmed Bandit Problem

TL;DR: This work shows that the optimal logarithmic regret is also achievable uniformly over time, with simple and efficient policies, and for all reward distributions with bounded support.
Journal ArticleDOI

Sample mean based index policies by O(log n) regret for the multi-armed bandit problem

TL;DR: This paper constructs index policies that depend on the rewards from each arm only through their sample mean, and achieves a O(log n) regret with a constant that is based on the Kullback–Leibler number.
Journal ArticleDOI

Exploration-exploitation tradeoff using variance estimates in multi-armed bandits

TL;DR: A variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account the empirical variance of the different arms is considered, providing the first analysis of the expected regret for such algorithms.