Large margin kernel pocket algorithm

Jianhua Xu, Xuegong Zhang, Yanda Li
IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)  
Two attractive advantages of SVMs are the ideas of kernels and of large margins. As a linear learning machine, the original pocket algorithm can handle both linearly separable and non-separable problems. To improve its classification ability and control its generalization, we generalize the original pocket algorithm by using kernels and adding a margin criterion, and propose its kernel, large-margin version, referred to as the large margin kernel pocket algorithm (LMKPA). The objective is to maximize both the number of correctly classified samples and the distance between the separating hyperplane and the correctly classified samples closest to it, in the feature space induced by the kernel. The new algorithm uses only an iterative procedure to implement the kernel and large-margin ideas simultaneously. For linearly separable problems, LMKPA finds a solution that is not only error-free but also nearly equivalent to that of an SVM with the large-margin goal. For linearly non-separable problems, its performance is also very close to that of SVM. Numerical experiments show that LMKPA performs comparably to SVM while being much simpler.
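The abstract's two-part objective (maximize the number of correctly classified samples, then the margin of the closest correctly classified ones, in kernel feature space) can be illustrated with a minimal sketch. This is not the authors' exact procedure: it is a kernelized perceptron with a pocket that ranks candidate solutions first by correct count and then by margin, using an assumed RBF kernel and assumed helper names.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix for an RBF kernel (an assumed kernel choice)."""
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def kernel_pocket(K, y, epochs=50):
    """Pocket-style kernel perceptron sketch.

    Keeps the best dual solution (alpha, b) seen so far, ranked by
    (number correct, margin of closest correct sample) -- a rough
    stand-in for LMKPA's combined criterion, not the paper's algorithm.
    """
    n = len(y)
    alpha = np.zeros(n)          # dual coefficients, one per sample
    b = 0.0
    best = (alpha.copy(), b)
    best_score = (-1, -np.inf)
    for _ in range(epochs):
        for i in range(n):
            f_i = (alpha * y) @ K[:, i] + b
            if y[i] * f_i <= 0:  # misclassified: perceptron-style update
                alpha[i] += 1.0
                b += y[i]
                # Score the candidate hyperplane.
                f = K @ (alpha * y) + b
                correct = y * f > 0
                n_correct = int(correct.sum())
                w_norm2 = (alpha * y) @ K @ (alpha * y)
                w_norm = np.sqrt(w_norm2) if w_norm2 > 0 else 1.0
                margin = (y * f)[correct].min() / w_norm if n_correct else -np.inf
                if (n_correct, margin) > best_score:
                    best_score = (n_correct, margin)
                    best = (alpha.copy(), b)
    return best
```

On a separable problem the pocket retains an error-free solution, and the margin tie-break pushes it toward the SVM-like large-margin hyperplane described in the abstract.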
doi:10.1109/ijcnn.2001.939581