Fighting Bandits with a New Kind of Smoothness

Jacob Abernethy, Chansoo Lee, Ambuj Tewari
2015, arXiv preprint
We define a novel family of algorithms for the adversarial multi-armed bandit problem, and provide a simple analysis technique based on convex smoothing. We prove two main results. First, we show that regularization via the Tsallis entropy, which includes EXP3 as a special case, achieves the Θ(√(TN)) minimax regret. Second, we show that a wide class of perturbation methods achieves a near-optimal regret as low as O(√(TN log N)) if the perturbation distribution has a bounded hazard rate. For example, the Gumbel, Weibull, Fréchet, Pareto, and Gamma distributions all satisfy this key property.
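The perturbation methods mentioned above can be illustrated with a minimal Follow-the-Perturbed-Leader sketch for the adversarial bandit setting. This is not the paper's exact algorithm, only a hedged illustration under stated assumptions: it draws fresh Gumbel perturbations each round (with Gumbel noise the resulting sampling distribution coincides with EXP3's softmax), and it uses importance-weighted loss estimates because only the played arm's loss is observed. The names `ftpl_bandit` and `loss_fn` are hypothetical, introduced here for illustration.

```python
import math
import random

def gumbel():
    # Sample from the standard Gumbel distribution via inverse-CDF.
    u = random.random()
    return -math.log(-math.log(u))

def ftpl_bandit(loss_fn, N, T, eta=0.1):
    """Follow-the-Perturbed-Leader sketch for the adversarial N-armed bandit.

    loss_fn(t, arm) -> loss in [0, 1]; losses of unplayed arms stay hidden.
    A sketch only, not the paper's algorithm as published.
    """
    L_hat = [0.0] * N  # importance-weighted cumulative loss estimates
    total = 0.0
    for t in range(T):
        # Play the leader of the perturbed estimated losses.
        arm = min(range(N), key=lambda i: L_hat[i] - gumbel() / eta)
        loss = loss_fn(t, arm)
        total += loss
        # Importance weighting needs the probability of having played `arm`;
        # under Gumbel perturbations this is the softmax probability.
        z = sum(math.exp(-eta * L) for L in L_hat)
        p = math.exp(-eta * L_hat[arm]) / z
        L_hat[arm] += loss / p
    return total
```

On a fixed loss sequence with one clearly best arm, the sampling distribution concentrates on that arm as its estimated cumulative loss stays smallest, which is the smoothing effect the abstract's hazard-rate condition is meant to control.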
arXiv:1512.04152v1