Efficient approximate leave-one-out cross-validation for kernel logistic regression

Gavin C. Cawley, Nicola L. C. Talbot
Machine Learning, 2008
Kernel logistic regression (KLR) is the kernel learning method best suited to binary pattern recognition problems where estimates of the a posteriori probability of class membership are required. Such problems occur frequently in practical applications, for instance because the operational prior class probabilities, or equivalently the relative misclassification costs, are variable or unknown at the time of training the model. The model parameters are given by the solution of a convex optimisation problem, which may be found via an efficient iteratively re-weighted least squares (IRWLS) procedure. The generalisation properties of a kernel logistic regression machine are, however, governed by a small number of hyper-parameters, the values of which must be determined during the process of model selection. In this paper, we propose a novel model selection strategy for KLR, based on a computationally efficient closed-form approximation of the leave-one-out cross-validation procedure. Results obtained on a variety of synthetic and real-world benchmark datasets are given, demonstrating that the proposed model selection procedure is competitive with a more conventional k-fold cross-validation based approach and also with Gaussian process (GP) classifiers implemented using the Laplace approximation and via the Expectation Propagation (EP) algorithm.
doi:10.1007/s10994-008-5055-9
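
To make the idea concrete, below is a minimal sketch (not the authors' code) of KLR fitted by IRWLS together with a closed-form approximate leave-one-out criterion of the kind the abstract describes: the final IRWLS step is treated as a weighted ridge-regression problem, so the standard hat-matrix LOO identity applies. The function names, the RBF kernel, the omission of a bias term, and the toy grid search are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def klr_irwls(K, y, lam, n_iter=100, tol=1e-8):
    """Fit KLR (targets y in {0, 1}; bias omitted for brevity) by IRWLS:
    each step solves the weighted ridge system (K + lam * W^{-1}) alpha = z."""
    eta = np.zeros(K.shape[0])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-eta))           # current probabilities
        w = np.clip(p * (1.0 - p), 1e-10, None)  # IRWLS weights
        z = eta + (y - p) / w                    # working responses
        alpha = np.linalg.solve(K + lam * np.diag(1.0 / w), z)
        eta_new = K @ alpha
        converged = np.max(np.abs(eta_new - eta)) < tol
        eta = eta_new
        if converged:
            break
    return alpha, eta, w, z

def approx_loo_nll(K, y, lam):
    """Approximate LOO negative log-likelihood: at convergence eta = H z with
    hat matrix H = K (K + lam * W^{-1})^{-1}, so the ridge LOO identity
    eta_i^{(-i)} = (eta_i - h_ii * z_i) / (1 - h_ii) gives the held-out
    predictions in closed form."""
    alpha, eta, w, z = klr_irwls(K, y, lam)
    h = np.diag(K @ np.linalg.inv(K + lam * np.diag(1.0 / w)))
    eta_loo = (eta - h * z) / (1.0 - h)
    p_loo = np.clip(1.0 / (1.0 + np.exp(-eta_loo)), 1e-12, 1.0 - 1e-12)
    return -np.mean(y * np.log(p_loo) + (1.0 - y) * np.log(1.0 - p_loo))

# Model selection: choose hyper-parameters minimising the approximate LOO loss
# on a toy problem (hypothetical grid).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2))
y = (X[:, 0] + X[:, 1] > 0.0).astype(float)
best = min((approx_loo_nll(rbf_kernel(X, g), y, l), g, l)
           for g in (0.1, 1.0, 10.0) for l in (0.01, 0.1, 1.0))
print("approx. LOO NLL = %.4f at gamma = %g, lambda = %g" % best)
```

The explicit matrix inverse above is only for clarity; an implementation concerned with efficiency would instead reuse the factorisation already computed in the final IRWLS step to obtain the hat-matrix diagonal, which is the source of the computational saving the abstract refers to.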