A kernelized maximal-figure-of-merit learning approach based on subspace distance minimization

Byungki Byun, Chin-Hui Lee
2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
We propose a kernelized maximal-figure-of-merit (MFoM) learning approach for efficiently training a nonlinear model using subspace distance minimization. In particular, a fixed, small number of training samples is chosen such that the distance between the function space constructed from the subset and the function space constructed from the entire training data set is minimized. This construction of the subset enables us to learn a nonlinear model efficiently while keeping the resulting model nearly unchanged compared to the model learned from the whole training data set. We show that the subspace distance can be minimized through the Nyström extension. Experimental results on various machine learning problems demonstrate clear advantages of the proposed technique over building the function space from randomly selected training samples. Additional comparisons with a model trained on the entire training set show that the proposed technique achieves comparable results while dramatically reducing training time.
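As background for the Nyström extension the abstract relies on, the following is a minimal sketch (not the paper's MFoM training procedure) of how a small landmark subset can approximate a full kernel matrix. The RBF kernel, the `gamma` value, and the random landmark selection are illustrative assumptions; the paper instead chooses the subset to minimize the subspace distance.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Squared Euclidean distances between rows of X and rows of Y,
    # turned into RBF similarities exp(-gamma * ||x - y||^2).
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-gamma * d2)

def nystrom_approx(X, landmark_idx, gamma=0.5):
    # Nystrom approximation K ~ C W^+ C^T, where C holds kernel values
    # between all points and the m landmarks, and W is the m x m
    # kernel matrix among the landmarks themselves.
    C = rbf_kernel(X, X[landmark_idx], gamma)   # shape (n, m)
    W = C[landmark_idx, :]                      # shape (m, m)
    return C @ np.linalg.pinv(W) @ C.T          # shape (n, n)

# Toy demonstration: approximate a 200 x 200 kernel matrix
# from 40 randomly chosen landmarks (random choice is a stand-in
# for the paper's subspace-distance-minimizing selection).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
idx = rng.choice(200, size=40, replace=False)

K_full = rbf_kernel(X, X)
K_ny = nystrom_approx(X, idx)
rel_err = np.linalg.norm(K_full - K_ny) / np.linalg.norm(K_full)
```

With more landmarks the relative error shrinks; the efficiency gain reported in the paper comes from working with the m-dimensional landmark subspace instead of all n training samples.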
doi:10.1109/icassp.2011.5946732 dblp:conf/icassp/ByunL11 fatcat:4tt2iyud6fcyvkbwiwt37w33re