A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Rates of Convergence for Sparse Variational Gaussian Process Regression
[article]
2019
Excellent variational approximations to Gaussian process posteriors have been developed which avoid the O(N³) scaling with dataset size N. They reduce the computational cost to O(NM²), with M≪N being the number of inducing variables, which summarise the process. While the computational cost seems to be linear in N, the true complexity of the algorithm depends on how M must increase to ensure a certain quality of approximation. We address this by characterising the behaviour of an upper bound on …
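To make the O(NM²) claim concrete, here is a minimal sketch (not the paper's implementation) of the dominant computation in inducing-point GP regression: with M inducing inputs Z summarising N data points X, the expensive step is forming Kuf Kufᵀ, which costs O(NM²), in contrast to the O(N³) solve required by exact GP regression. The kernel choice, helper names, and the `noise` parameter are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of a and b.
    sq = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def sparse_gp_inducing_mean(X, y, Z, noise=0.1):
    """Variational posterior mean over the M inducing variables (a sketch).

    N = len(X), M = len(Z). The dominant cost is Kuf @ Kuf.T: O(N M^2),
    versus the O(N^3) Cholesky/solve of exact GP regression on all N points.
    """
    Kuu = rbf(Z, Z)                                   # M x M
    Kuf = rbf(Z, X)                                   # M x N
    A = Kuu + Kuf @ Kuf.T / noise**2                  # O(N M^2) bottleneck
    m_u = Kuu @ np.linalg.solve(A, Kuf @ y) / noise**2
    return m_u                                        # shape (M,)
```

Because M≪N, the per-iteration cost is linear in N for fixed M; the paper's question is how M itself must grow with N to keep the approximation accurate.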
doi:10.17863/cam.45147
fatcat:kpdhvocpt5bgvowr3ern36hcre