Abstract
We consider a setting where multiple players sequentially choose among a
common set of actions (arms). Motivated by a cognitive radio networks
application, we assume that players incur a loss upon colliding, and that
communication between players is not possible. Existing approaches assume that
the system is stationary. Yet this assumption is often violated in practice,
e.g., due to signal strength fluctuations. In this work, we design the first
Multi-player Bandit algorithm that provably works in arbitrarily changing
environments, where the losses of the arms may even be chosen by an adversary.
This resolves an open problem posed by Rosenski, Shamir, and Szlak (2016).
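The setting described above can be illustrated with a minimal simulation sketch (the names, collision loss value, and random stand-ins for the adversary and the players' strategies are illustrative assumptions, not the paper's algorithm): in each round every player picks an arm, players that collide on the same arm incur a maximal loss, each player observes only its own loss, and the per-arm losses may change arbitrarily over time.

    import random

    def play_round(arm_losses, chosen_arms, collision_loss=1.0):
        """One round of the multi-player bandit game.

        arm_losses     : per-arm losses in [0, 1] for this round (may be chosen adversarially)
        chosen_arms    : chosen_arms[p] is the arm picked by player p (no communication between players)
        collision_loss : loss incurred when two or more players pick the same arm

        Returns the loss observed by each player (bandit feedback only).
        """
        counts = {}
        for arm in chosen_arms:
            counts[arm] = counts.get(arm, 0) + 1
        return [
            collision_loss if counts[arm] > 1 else arm_losses[arm]
            for arm in chosen_arms
        ]

    if __name__ == "__main__":
        # Illustrative run: 3 players, 5 arms, losses drawn at random as a
        # stand-in for an adversary; uniform picks stand in for player strategies.
        num_players, num_arms, horizon = 3, 5, 4
        rng = random.Random(0)
        for t in range(horizon):
            losses = [rng.random() for _ in range(num_arms)]
            picks = [rng.randrange(num_arms) for _ in range(num_players)]
            print(t, picks, play_round(losses, picks))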