
Rademacher Complexity Bounds for a Penalized Multi-class Semi-supervised Algorithm

Yury Maximov, Massih-Reza Amini, Zaid Harchaoui
2018 The Journal of Artificial Intelligence Research  
We propose Rademacher complexity bounds for multi-class classifiers trained with a two-step semi-supervised model.  ...  their non-predominant classes is below a fixed threshold stands for clustering consistency.  ...  Theorem 11 (Multi-class Rademacher generalization bounds; remark 6 of Lei et al., 2015) Let F_H ⊂ ℝ^(X×Y) be a hypothesis class with Y = {1, ..., K}.  ...
doi:10.1613/jair.5638 fatcat:wxloymbtyfbnrhwyl6a4xztfim

Rademacher Complexity Bounds for a Penalized Multi-class Semi-supervised Algorithm (Extended Abstract)

Yury Maximov, Massih-Reza Amini, Zaid Harchaoui
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
We propose Rademacher complexity bounds for multi-class classifiers trained with a two-step semi-supervised model.  ...  their non-predominant classes is below a fixed threshold stands for clustering consistency.  ...  The pseudo-code of the proposed 2-step approach, referred to as Penalized Multi-Class Semi-Supervised Learning (PMS²L), is given in Algorithm 1.  ...
doi:10.24963/ijcai.2018/800 dblp:conf/ijcai/MaximovAH18 fatcat:ey5qlhwnsfafrfyxqy447wu4o4

Rademacher Complexity Bounds for a Penalized Multiclass Semi-Supervised Algorithm [article]

Yury Maximov, Massih-Reza Amini, Zaid Harchaoui
2018 arXiv   pre-print
We propose Rademacher complexity bounds for multiclass classifiers trained with a two-step semi-supervised model.  ...  The resulting data-dependent generalization error bound involves the margin distribution of the classifier, the stability of the clustering technique used in the first step and Rademacher complexity terms  ...  This work has been partially supported by the THANATOS project funded by Appel à projets Grenoble Innovation Recherche.  ... 
arXiv:1607.00567v3 fatcat:fsortvjuxnclja3bjgxrwxtbcu

Generalization Error Bounds Using Unlabeled Data [chapter]

Matti Kääriäinen
2005 Lecture Notes in Computer Science  
We present two new methods for obtaining generalization error bounds in a semi-supervised setting.  ...  The result is a semi-supervised bound for classifiers learned based on all the labeled data. The bound is easy to implement and apply and should be tight whenever cross-validation makes sense.  ...  I wish to thank John Langford, Jyrki Kivinen, Anssi Kääriäinen, and Taneli Mielikäinen for helpful discussions.  ... 
doi:10.1007/11503415_9 fatcat:gq2g6hygfrbgpdjpdrrivrbaam

Sparse Group Inductive Matrix Completion [article]

Ivan Nazarov, Boris Shirokikh, Maria Burkina, Gennady Fedonin and Maxim Panov
2018 arXiv   pre-print
We demonstrate that the theoretical sample complexity for the proposed method is much lower compared to its competitors in sparse problems, and propose an efficient optimization algorithm for the resulting  ...  We incorporate feature selection into inductive matrix completion by proposing a matrix factorization framework with group-lasso regularization on side feature parameter matrices.  ...  Thus, to bound the risk we need to bound the Rademacher complexity R_m(F).  ...
arXiv:1804.10653v2 fatcat:4sl4wwou45f4ld43eq7b6utluy

Multi-view transfer learning with a large margin approach

Dan Zhang, Jingrui He, Yan Liu, Luo Si, Richard Lawrence
2011 Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '11  
For example, a web page can be described by its contents and its associated links.  ...  However, most existing transfer learning methods fail to capture the multi-view nature, and might not be best suited for such applications.  ...  Vishwanathan (Purdue University), and the anonymous reviewers for their valuable comments and suggestions.  ... 
doi:10.1145/2020408.2020593 dblp:conf/kdd/ZhangHLSL11 fatcat:4kio4uhlpjhldbdoes7r6uoa6u

Multi-view Metric Learning in Vector-valued Kernel Spaces [article]

Riikka Huusari, Cécile Capponi
2018 arXiv   pre-print
We consider the problem of metric learning for multi-view data and present a novel method for learning within-view as well as between-view metrics in vector-valued kernel spaces, as a way to capture multi-modal  ...  An iterative three-step multi-view metric learning algorithm is derived from the optimization problems.  ...  Acknowledgements We thank the anonymous reviewers for their relevant and helpful comments. This work is granted by Lives Project (ANR-15-CE23-0026).  ... 
arXiv:1803.07821v1 fatcat:j5hlxuwffnelnhhdubk7ftg4vq

Q-MKL: Matrix-induced Regularization in Multi-Kernel Learning with Applications to Neuroimaging

Chris Hinrichs, Vikas Singh, Jiming Peng, Sterling C Johnson
2012 Advances in Neural Information Processing Systems  
We briefly discuss ramifications in terms of learning bounds (Rademacher complexity).  ...  Model complexity is typically controlled using various norm regularizations on the base kernel mixing coefficients.  ...  The bound in (5) shows that the Rademacher complexity R_S(·) depends on ‖u‖_q, which is a norm on the traces of the base kernels.  ...
pmid:25309107 pmcid:PMC4189130 fatcat:lqukvyqdfrgjracsmyass7gj3q

Multi-View Intact Space Learning [article]

Chang Xu, Dacheng Tao, Chao Xu
2019 arXiv   pre-print
We propose a new definition of multi-view stability and then derive the generalization error bound based on multi-view stability and Rademacher complexity, and show that the complementarity between multiple  ...  In this paper, we propose the Multi-view Intact Space Learning (MISL) algorithm, which integrates the encoded complementary information in multiple views to discover a latent intact representation of the  ...  ACKNOWLEDGMENT We greatly thank the handling Associate Editor and all three anonymous reviewers for their constructive comments on this submission.  ...
arXiv:1904.02340v1 fatcat:xaj76tqhirdkpagogjhdlja2ma

A Statistical Learning Theory Framework for Supervised Pattern Discovery [chapter]

Jonathan H. Huggins, Cynthia Rudin
2014 Proceedings of the 2014 SIAM International Conference on Data Mining  
The bounds for the second version of the problem are stated in terms of a new complexity measure, the quasi-Rademacher complexity.  ...  We discuss two versions of the problem and prove uniform risk bounds for both.  ...
doi:10.1137/1.9781611973440.58 dblp:conf/sdm/HugginsR14 fatcat:ok5bn3jrgjhfrdy6cn6qnxeoci

Algorithms and Theory for Supervised Gradual Domain Adaptation [article]

Jing Dong, Shiji Zhou, Baoxiang Wang, Han Zhao
2022 arXiv   pre-print
Our results are algorithm agnostic, general for a range of loss functions, and only depend linearly on the averaged learning error across the trajectory.  ...  The phenomenon of data distribution evolving over time has been observed in a range of applications, calling for adaptive learning algorithms.  ...  For some complicated function classes, such as multi-layer neural networks, they also enjoy a sequential Rademacher complexity of order O(1/nT) (Rakhlin et al., 2015).  ...
arXiv:2204.11644v1 fatcat:hu4lt7dokjcfxpfdqjmbazpn5a

A Statistical Learning Theory Framework for Supervised Pattern Discovery [article]

Jonathan H. Huggins, Cynthia Rudin
2014 arXiv   pre-print
The bounds for the second version of the problem are stated in terms of a new complexity measure, the quasi-Rademacher complexity.  ...  We discuss two versions of the problem and prove uniform risk bounds for both.  ...  Acknowledgements Thanks to Dylan Kotliar and Yakir Reshef for helpful discussions regarding personalized medicine and cancer genomics and to Peter Krafft for helpful comments.  ... 
arXiv:1307.0802v2 fatcat:wiivcrz6ybbdnek7k7g22jb63m

Fast rates by transferring from auxiliary hypotheses

Ilja Kuzborskij, Francesco Orabona
2016 Machine Learning  
As a byproduct of our study, we also prove a new bound on the Rademacher complexity of the smooth loss class under weaker assumptions compared to previous works.  ...  We focus on a broad class of ERM-based linear algorithms that can be instantiated with any non-negative smooth loss function and any strongly convex regularizer.  ...  The last step is to give an upper-bound on the empirical Rademacher complexity of a class regularized by a strongly convex function.  ... 
doi:10.1007/s10994-016-5594-4 fatcat:xl5upzr435gdxgud6ivvpymoti

A Survey on Multi-view Learning [article]

Chang Xu, Dacheng Tao, Chao Xu
2013 arXiv   pre-print
In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different  ...  In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed.  ...  Assuming different views to be uncorrelated, Kloft and Blanchard (2011) derived a tighter upper bound by the local Rademacher complexities for the ℓ_p-norm MKL.  ...
arXiv:1304.5634v1 fatcat:nnux76pyobdzhovzlcywxrzkty

A Survey on Multi-Task Learning [article]

Yu Zhang, Qiang Yang
2018 arXiv   pre-print
In this paper, we give a survey for MTL.  ...  In order to improve the performance of learning tasks further, MTL can be combined with other learning paradigms including semi-supervised learning, active learning, unsupervised learning, reinforcement  ...  For the multi-task sparse coding method shown in problem (3), Maurer et al. [14] analyze its generalization bound with the use of the Rademacher complexity.  ... 
arXiv:1707.08114v2 fatcat:6lrpe4nk45djbjyfjco7t4yfme
Showing results 1–15 of 134 results