A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
Adversarial Robustness Guarantees for Classification with Gaussian Processes
[article]
2020, arXiv pre-print
We investigate the adversarial robustness of Gaussian Process Classification (GPC) models. Given a compact subset of the input space T ⊆ R^d enclosing a test point x^* and a GPC trained on a dataset D, we aim to compute the minimum and maximum classification probability of the GPC over all points in T. To do so, we show how functions lower- and upper-bounding the GPC output on T can be derived, and implement them in a branch-and-bound optimisation algorithm. For any error threshold […]
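The branch-and-bound scheme the abstract describes can be illustrated with a generic sketch. Note this is not the paper's implementation: the GPC-specific bounding functions the paper derives are replaced here by hypothetical stand-in oracles (`lower_bound`, `upper_bound`) supplied by the caller, and the demo objective is an arbitrary quadratic chosen only to exercise the loop.

```python
import heapq

def branch_and_bound_min(lower_bound, upper_bound, box, eps=1e-3, max_iter=10000):
    """Minimise a function over an axis-aligned box T by branch and bound.

    lower_bound(box) must return a valid lower bound on f over the box;
    upper_bound(box) must return f evaluated at some point inside the box
    (hence a valid upper bound on the minimum). `box` is a list of
    (lo, hi) intervals, one per dimension. Returns (lb, ub) with
    ub - lb <= eps once converged.
    """
    best_ub = upper_bound(box)
    heap = [(lower_bound(box), box)]          # regions ordered by lower bound
    for _ in range(max_iter):
        lb, b = heapq.heappop(heap)
        if best_ub - lb <= eps:               # certified to the error threshold
            return lb, best_ub
        # Split the region along its widest dimension.
        i = max(range(len(b)), key=lambda j: b[j][1] - b[j][0])
        lo, hi = b[i]
        mid = (lo + hi) / 2.0
        for sub in (b[:i] + [(lo, mid)] + b[i + 1:],
                    b[:i] + [(mid, hi)] + b[i + 1:]):
            best_ub = min(best_ub, upper_bound(sub))
            heapq.heappush(heap, (lower_bound(sub), sub))
    # Budget exhausted: report the best certified bounds found so far.
    return min(lb for lb, _ in heap), best_ub

# Demo on f(x) = (x - 0.3)^2 over [0, 1], with an exact interval lower bound.
f = lambda x: (x - 0.3) ** 2
def lb_oracle(b):
    lo, hi = b[0]
    return 0.0 if lo <= 0.3 <= hi else min(f(lo), f(hi))
ub_oracle = lambda b: f((b[0][0] + b[0][1]) / 2.0)  # evaluate at the midpoint

lb, ub = branch_and_bound_min(lb_oracle, ub_oracle, [(0.0, 1.0)])
```

The same loop computes the maximum by negating the objective and swapping the two oracles; convergence speed depends entirely on how tight the bounding functions are on small regions.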
arXiv:1905.11876v3
fatcat:sfyexiphabdxtcu7aqyi6s2nli