Kernel methods for data analysis [thesis]

Nathan Douglas Pearce
In this thesis, statistical inference is made using reproducing kernel methods. Kernel methods are a versatile tool that allows a seamless transition from parametric to nonparametric methods. A flagship example is the support vector machine: an effective and elegant method for classification problems, it is one of many applications of kernel methods. By embedding penalised splines within kernel methods, the support vector machine is given an interpretable, even additive, structure. The large-scale computational issues involved with support vector machines are addressed in detail. Kernel methods are further used to make explicit links to longitudinal data analysis; in doing so, the broad kernel machine methodology can incorporate the repeated measurements of longitudinal data analysis. Additionally, explicit links are made between degrees of freedom and kernel methods. Bayesian methodology is also addressed with kernel methods: a variational Bayes approach is used for linear mixed models and generalised linear mixed models, and is shown to be computationally efficient. Moreover, classical methods such as restricted maximum likelihood and penalised quasi-likelihood are shown to be special cases of variational Bayes. The final chapter of this thesis addresses the issue of model selection under only minimal assumptions; the robustness of this approach is verified through extensive testing.
doi:10.26190/unsworks/14927