Abstract
We address online combinatorial optimization when the player has a prior over
the adversary's sequence of losses. In this framework, Russo and Van Roy
proposed an information-theoretic analysis of Thompson Sampling based on the
information ratio, resulting in optimal worst-case regret bounds. In this
paper we introduce three novel ideas to this line of work. First we propose a
new quantity, the scale-sensitive information ratio, which allows us to obtain
more refined first-order regret bounds (i.e., bounds of the form $\sqrt{L^*}$
where $L^*$ is the loss of the best combinatorial action). Second we replace
the entropy over combinatorial actions by a coordinate entropy, which allows us
to obtain the first optimal worst-case bound for Thompson Sampling in the
combinatorial setting. Finally, we introduce a novel link between Bayesian
agents and frequentist confidence intervals. Combining these ideas we show that
the classical multi-armed bandit first-order regret bound $O(\sqrt{d L^*})$
still holds true in the more challenging and more general semi-bandit
scenario. This latter result improves the previous state-of-the-art bound
$O(\sqrt{(d+m^3) L^*})$ by Lykouris, Sridharan and Tardos.
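For reference, the information ratio of Russo and Van Roy (which the abstract's scale-sensitive variant refines) is standardly defined, in loss form, as the squared expected instantaneous regret divided by the information gained about the optimal action $a^*$:
$$\Gamma_t \;=\; \frac{\big(\mathbb{E}_t\big[\ell_t(a_t) - \ell_t(a^*)\big]\big)^2}{\mathbb{I}_t\big(a^*;\, (a_t, \ell_t(a_t))\big)},$$
where $\mathbb{E}_t$ and $\mathbb{I}_t$ denote expectation and mutual information conditioned on the history before round $t$, $a_t$ is the action drawn by Thompson Sampling, and $\ell_t$ is the loss at round $t$. A uniform bound $\Gamma_t \le \Gamma$ for all $t$ yields Bayesian regret at most $\sqrt{\Gamma \, H(a^*) \, T}$, which is the route to the worst-case bounds mentioned above; the exact form of the scale-sensitive variant is given in the body of the paper.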