We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the Tsallis entropy, which includes EXP3 as a special case, achieves the Θ(√(TN)) minimax regret. Second, we show that a wide class of perturbation methods achieve a near-optimal regret as low as O(√(TN log N)) if the perturbation distribution has a bounded hazard rate. For example,

arXiv:1512.04152v1
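Since the abstract names EXP3 as a special case of the Tsallis-entropy regularizer, a minimal sketch of the standard EXP3 algorithm may help fix ideas. This is a generic textbook implementation, not the paper's algorithm; the callback `reward_fn` and all parameter names are illustrative assumptions, and rewards are assumed to lie in [0, 1].

```python
import math
import random

def exp3(num_arms, horizon, reward_fn, gamma=0.1, seed=0):
    """Sketch of EXP3 for adversarial bandits with rewards in [0, 1].

    `reward_fn(t, arm)` is a hypothetical callback returning the
    adversary's reward for pulling `arm` in round `t`.
    """
    rng = random.Random(seed)
    weights = [1.0] * num_arms
    total_reward = 0.0
    for t in range(horizon):
        w_sum = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration.
        probs = [(1 - gamma) * w / w_sum + gamma / num_arms for w in weights]
        arm = rng.choices(range(num_arms), weights=probs)[0]
        reward = reward_fn(t, arm)
        total_reward += reward
        # Importance-weighted estimate: unbiased because only the pulled
        # arm's reward is observed (bandit feedback).
        est = reward / probs[arm]
        weights[arm] *= math.exp(gamma * est / num_arms)
    return total_reward
```

Against a fixed environment where one arm always pays 1, the exploration mixture lets the estimator concentrate weight on that arm and the cumulative reward approaches the horizon.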