(Sequential) Importance Sampling Bandits
[article] 2019 · arXiv pre-print
This work extends existing multi-armed bandit (MAB) algorithms beyond their original settings by leveraging advances in sequential Monte Carlo (SMC) methods from the approximate inference community. We leverage Monte Carlo estimation and, in particular, the flexibility of (sequential) importance sampling to allow for accurate estimation of the statistics of interest within the MAB problem. The MAB is a sequential allocation task where the goal is to learn a policy that maximizes long term […]
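The abstract's core idea is to replace closed-form posteriors in a bandit algorithm with a weighted-particle (importance sampling) approximation, updated sequentially as rewards arrive. The Python sketch below illustrates that idea for a Bernoulli bandit with Thompson sampling; it is a minimal illustration under assumed choices (a uniform prior, 500 particles per arm, an ESS-triggered jittered resampling step), not the algorithm from the paper itself.

# Sketch: sequential-importance-sampling Thompson sampling on a
# Bernoulli bandit. All priors, constants, and the resampling rule
# are illustrative assumptions, not taken from arXiv:1808.02933.
import numpy as np

rng = np.random.default_rng(0)

K = 3                                    # number of arms
N = 500                                  # particles per arm
true_means = np.array([0.3, 0.5, 0.7])   # unknown to the learner

# Particles drawn from a Uniform(0,1) prior over each arm's mean reward;
# the weights form a discrete approximation of each arm's posterior.
particles = rng.uniform(size=(K, N))
weights = np.full((K, N), 1.0 / N)

def effective_sample_size(w):
    # Standard ESS diagnostic for weight degeneracy.
    return 1.0 / np.sum(w ** 2)

for t in range(2000):
    # Thompson sampling: draw one particle per arm from its weighted
    # posterior approximation, then play the arm with the largest draw.
    sampled = np.array([
        rng.choice(particles[k], p=weights[k]) for k in range(K)
    ])
    a = int(np.argmax(sampled))
    reward = rng.binomial(1, true_means[a])

    # Sequential importance sampling update for the played arm:
    # reweight its particles by the Bernoulli likelihood of the reward.
    lik = particles[a] ** reward * (1.0 - particles[a]) ** (1 - reward)
    weights[a] *= lik
    weights[a] /= weights[a].sum()

    # Resample (with small jitter to keep particles diverse) when the
    # effective sample size drops below half the particle count.
    if effective_sample_size(weights[a]) < N / 2:
        idx = rng.choice(N, size=N, p=weights[a])
        particles[a] = np.clip(
            particles[a][idx] + rng.normal(0.0, 0.02, size=N),
            1e-6, 1.0 - 1e-6,
        )
        weights[a] = np.full(N, 1.0 / N)

print("posterior mean estimates:", np.sum(particles * weights, axis=1))

Run as written, the weighted posterior means converge toward the true arm means while play concentrates on the best arm; the appeal of the importance-sampling view is that the Bernoulli likelihood above can be swapped for any evaluable reward model without re-deriving conjugate updates.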
arXiv:1808.02933v3
fatcat:gdvmt7bujbcsplk2ldyijzk34y