Manifold Learning via Multi-Penalty Regularization

Abhishake Rastogi
2017 International Journal of Artificial Intelligence & Applications  
Manifold regularization is an approach that exploits the geometry of the marginal distribution. The main goal of this paper is to analyze the convergence of such regularization algorithms in learning theory. We propose a more general multi-penalty framework and establish optimal convergence rates under a general smoothness assumption. We give a theoretical analysis of the performance of multi-penalty regularization over a reproducing kernel Hilbert space, and discuss error estimates of the regularization schemes under prior assumptions on the joint probability measure on the sample space. We analyze the convergence rates of the learning algorithms measured both in the norm of the reproducing kernel Hilbert space and in the norm of the Hilbert space of square-integrable functions; convergence is established in a probabilistic sense via exponential tail inequalities. In order to optimize the regularization functional, one of the crucial issues is the selection of regularization parameters that ensure good performance of the solution. We propose a new parameter choice rule, the "penalty balancing principle," based on augmented Tikhonov regularization. The superiority of multi-penalty regularization over single-penalty regularization is demonstrated on an academic example and the two-moons data set.
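To make the setting concrete, the following is a minimal sketch of a two-penalty (manifold) regularization scheme in an RKHS: a squared loss combined with an RKHS-norm penalty and a graph-Laplacian smoothness penalty, in the spirit of Laplacian-regularized least squares. The kernel choice, the Laplacian construction, and all function names here are illustrative assumptions, not the paper's actual algorithm or its parameter choice rule.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gaussian (RBF) kernel matrix between row-sets X and Z (an assumed choice).
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def graph_laplacian(X, sigma=1.0):
    # Unnormalized graph Laplacian L = D - W built from Gaussian weights
    # on the sample itself; this is one standard way to encode the
    # geometry of the marginal distribution.
    W = gaussian_kernel(X, X, sigma)
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    return D - W

def multi_penalty_fit(X, y, lam1=1e-3, lam2=1e-3, sigma=1.0):
    """Minimize (1/n)||y - K a||^2 + lam1 a'K a + lam2 a'K L K a over a.

    The first-order condition is (2/n) K (K a - y) + 2 lam1 K a
    + 2 lam2 K L K a = 0; cancelling one factor of K (assuming K is
    invertible) gives the linear system (K + n lam1 I + n lam2 L K) a = y.
    """
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    L = graph_laplacian(X, sigma)
    A = K + n * lam1 * np.eye(n) + n * lam2 * (L @ K)
    alpha = np.linalg.solve(A, y)
    return alpha, K

# Toy usage: fit a smooth target on random 2-D inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = np.sin(X[:, 0])
alpha, K = multi_penalty_fit(X, y, lam1=0.1, lam2=0.1)
pred = K @ alpha  # in-sample predictions f(x_i) = sum_j alpha_j k(x_j, x_i)
```

Setting `lam2 = 0` recovers ordinary single-penalty (Tikhonov) kernel regularization, which is the baseline the paper compares against.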
doi:10.5121/ijaia.2017.8506