Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit
Problems
S. Bubeck and N. Cesa-Bianchi (2012). arXiv:1204.5721. Comment: To appear in Foundations and Trends in Machine Learning.
Abstract
Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the Thirties,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.
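To make the regret notion from the abstract concrete, here is a minimal illustrative sketch (not taken from the survey) of the stochastic i.i.d. setting: a UCB1-style learner on Bernoulli arms, measuring pseudo-regret, i.e. the cumulative gap between the best arm's mean and the mean of each arm actually played. The arm means and horizon below are arbitrary choices for illustration.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Run a UCB1-style policy on Bernoulli arms with the given means.

    Returns the pseudo-regret: sum over rounds of (best mean - mean of
    the arm played). Illustrative sketch, not the survey's exact algorithm.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k        # number of pulls per arm
    sums = [0.0] * k        # total observed reward per arm
    best = max(means)
    pseudo_regret = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1     # play each arm once to initialize
        else:
            # exploit (empirical mean) + explore (confidence bonus)
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        pseudo_regret += best - means[arm]
    return pseudo_regret

# Pseudo-regret stays far below the linear worst case (horizon * max gap),
# reflecting the logarithmic regret guarantees discussed in the survey.
print(ucb1([0.3, 0.5, 0.7], horizon=10000))
```

The confidence bonus `sqrt(2 ln t / n_i)` is what balances the trade-off described above: rarely played arms get a large bonus (exploration), while well-sampled arms are judged mostly by their empirical mean (exploitation).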
Description
[1204.5721] Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems
%0 Journal Article
%1 bubeck2012regret
%A Bubeck, Sébastien
%A Cesa-Bianchi, Nicolò
%D 2012
%K bandits online-learning optimization readings survey
%T Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit
Problems
%U http://arxiv.org/abs/1204.5721
%X Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the Thirties,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.
@article{bubeck2012regret,
abstract = {Multi-armed bandit problems are the most basic examples of sequential
decision problems with an exploration-exploitation trade-off. This is the
balance between staying with the option that gave highest payoffs in the past
and exploring new options that might give higher payoffs in the future.
Although the study of bandit problems dates back to the Thirties,
exploration-exploitation trade-offs arise in several modern applications, such
as ad placement, website optimization, and packet routing. Mathematically, a
multi-armed bandit is defined by the payoff process associated with each
option. In this survey, we focus on two extreme cases in which the analysis of
regret is particularly simple and elegant: i.i.d. payoffs and adversarial
payoffs. Besides the basic setting of finitely many actions, we also analyze
some of the most important variants and extensions, such as the contextual
bandit model.},
added-at = {2019-12-06T20:34:12.000+0100},
author = {Bubeck, Sébastien and Cesa-Bianchi, Nicolò},
biburl = {https://www.bibsonomy.org/bibtex/2baa6ccf9d94e79b4c19556afa4e0b09b/kirk86},
description = {[1204.5721] Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems},
interhash = {a14cb392a5748926025acac57e9cace1},
intrahash = {baa6ccf9d94e79b4c19556afa4e0b09b},
keywords = {bandits online-learning optimization readings survey},
note = {cite arxiv:1204.5721. Comment: To appear in Foundations and Trends in Machine Learning},
timestamp = {2019-12-06T20:34:12.000+0100},
title = {Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit
Problems},
url = {http://arxiv.org/abs/1204.5721},
year = 2012
}