A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
Stationary one-dimensional Gaussian process models in machine learning can be reformulated as state space equations. This reduces the cubic computational complexity of the naive full GP solution to linear with respect to the number of training data points. For infinitely differentiable covariance functions the representation is an approximation. In this paper, we study a class of covariance functions that can be represented as a scale mixture of squared exponentials. We show how the generalized …

doi:10.1109/mlsp.2014.6958899  dblp:conf/mlsp/SolinS14  fatcat:4zznrjhl5rabfazguhh5bnbgcq
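To illustrate the linear-time idea the abstract refers to (not the paper's own construction, which concerns scale mixtures of squared exponentials): for a Markovian covariance such as the exponential (Ornstein-Uhlenbeck / Matérn-1/2) kernel k(τ) = σ² exp(−|τ|/ℓ), the state-space representation is exact, and Kalman filtering plus Rauch-Tung-Striebel smoothing reproduces the full GP posterior mean in O(n) instead of O(n³). The function name and parameters below are illustrative, not from the paper:

```python
import numpy as np

def ou_kalman_posterior_mean(t, y, ell=1.0, sigma2=1.0, noise=0.1):
    """O(n) posterior mean for a zero-mean GP with covariance
    k(tau) = sigma2 * exp(-|tau| / ell), via Kalman filtering and
    Rauch-Tung-Striebel smoothing (exact for this Markovian kernel)."""
    n = len(t)
    m, P = 0.0, sigma2                       # stationary prior state
    ms, Ps, As = np.zeros(n), np.zeros(n), np.zeros(n)
    for k in range(n):                       # forward filtering pass
        if k > 0:
            A = np.exp(-(t[k] - t[k - 1]) / ell)   # discrete transition
            Q = sigma2 * (1.0 - A * A)             # process noise
            m, P = A * m, A * A * P + Q
            As[k] = A
        S = P + noise                        # innovation variance
        K = P / S                            # Kalman gain
        m = m + K * (y[k] - m)
        P = (1.0 - K) * P
        ms[k], Ps[k] = m, P
    out = np.zeros(n)                        # backward smoothing pass
    out[-1] = ms[-1]
    mS = ms[-1]
    for k in range(n - 2, -1, -1):
        A = As[k + 1]
        Pp = A * A * Ps[k] + sigma2 * (1.0 - A * A)  # predicted variance
        G = Ps[k] * A / Pp                            # smoother gain
        mS = ms[k] + G * (mS - A * ms[k])
        out[k] = mS
    return out
```

For this kernel the result matches the naive O(n³) GP mean K(K + σₙ²I)⁻¹y to numerical precision; the paper's contribution is extending such representations (approximately) to the smoother, infinitely differentiable kernels in the scale-mixture class.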