Alpha-Beta Divergence For Variational Inference

Jean-Baptiste Regli, Ricardo Silva
2018, arXiv preprint
This paper introduces a variational approximation framework that directly optimizes the scale-invariant Alpha-Beta divergence (sAB divergence). This new objective encompasses most variational objectives based on the Kullback-Leibler, Rényi, or gamma divergences. It also gives access to objective functions never before exploited in the context of variational inference. This is achieved via two easy-to-interpret control parameters, which allow for a smooth interpolation over the divergence space while trading off properties such as mass-covering of a target distribution and robustness to outliers in the data. Furthermore, the sAB variational objective can be optimized directly by repurposing existing methods for Monte Carlo computation of complex variational objectives, yielding estimates of the divergence itself rather than variational lower bounds. We show the advantages of this objective on Bayesian models for regression problems.
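The abstract notes that the divergence can be estimated directly by Monte Carlo rather than through a variational lower bound. As a minimal sketch of that idea, the snippet below estimates the (non-scale-invariant) Alpha-Beta divergence in the Cichocki-Amari form, rewritten as an expectation under q so that samples from q suffice; the paper's scale-invariant variant, its parameterization, and the function names here are assumptions, not the authors' implementation:

```python
import numpy as np

def gauss_logpdf(x, mu, sigma):
    """Log-density of a univariate Gaussian (stand-in for model densities)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def ab_divergence_mc(logp, logq, samples_q, alpha, beta):
    """Monte Carlo estimate of the Alpha-Beta divergence D_AB(p || q).

    Assumed form (Cichocki-Amari), valid for alpha, beta, alpha+beta != 0:
      D_AB = -1/(ab) * Int[ p^a q^b - a/(a+b) p^(a+b) - b/(a+b) q^(a+b) ] dx
    rewritten as an expectation under q:
      D_AB = -1/(ab) * E_q[ p^a q^(b-1)
                            - a/(a+b) p^(a+b) / q
                            - b/(a+b) q^(a+b-1) ].
    """
    a, b = alpha, beta
    lp, lq = logp(samples_q), logq(samples_q)
    term1 = np.exp(a * lp + (b - 1.0) * lq)
    term2 = np.exp((a + b) * lp - lq)
    term3 = np.exp((a + b - 1.0) * lq)
    integrand = term1 - (a / (a + b)) * term2 - (b / (a + b)) * term3
    return -np.mean(integrand) / (a * b)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)  # samples from q = N(0, 1)

logq = lambda t: gauss_logpdf(t, 0.0, 1.0)  # q = N(0, 1)
logp = lambda t: gauss_logpdf(t, 1.0, 1.0)  # p = N(1, 1)

d_pq = ab_divergence_mc(logp, logq, x, alpha=0.8, beta=0.4)  # > 0
d_qq = ab_divergence_mc(logq, logq, x, alpha=0.8, beta=0.4)  # ~ 0
```

For alpha, beta > 0 the integrand is pointwise nonnegative by weighted AM-GM, so the estimate is positive for distinct densities and vanishes (up to floating-point error) when p = q, which serves as a quick sanity check.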
arXiv:1805.01045v2