
Learning Stochastic Majority Votes by Minimizing a PAC-Bayes Generalization Bound [article]

Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, Benjamin Guedj
2021 arXiv   pre-print
The resulting stochastic majority vote learning algorithm achieves state-of-the-art accuracy and benefits from tight, non-vacuous generalization bounds in a series of numerical experiments when compared ...  We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties. ...
arXiv:2106.12535v2
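As background for this entry's claim of tight, non-vacuous bounds: a standard form of the kind of bound being minimized is the PAC-Bayes-kl inequality of Seeger and Maurer (stated here for context; the paper's exact bound may differ):

```latex
% PAC-Bayes-kl bound: with probability at least 1 - \delta over an i.i.d.
% sample S of size n, simultaneously for all posteriors Q over the ensemble,
\operatorname{kl}\!\big(\widehat{R}_S(G_Q) \,\big\|\, R(G_Q)\big)
    \;\le\; \frac{\operatorname{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n}
```

where kl is the binary KL divergence, R̂_S(G_Q) and R(G_Q) are the empirical and true risks of the Gibbs classifier G_Q, and P is a data-independent prior.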

From PAC-Bayes Bounds to KL Regularization

Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand, Sara Shanian
2009 Neural Information Processing Systems  
We show that convex KL-regularized objective functions are obtained from a PAC-Bayes risk bound when using convex loss functions for the stochastic Gibbs classifier that upper-bound the standard zero-one loss used for the weighted majority vote. ...
dblp:conf/nips/GermainLLMS09
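To make the connection between PAC-Bayes bounds and KL regularization concrete, here is a minimal sketch (not the paper's algorithm; the function names, the Gaussian posterior/prior choice, and the toy data are all illustrative assumptions) that minimizes an objective of the form "convex surrogate of the Gibbs risk plus a scaled KL term":

```python
import numpy as np

def pac_bayes_objective(w, X, y, w0, n, C=1.0):
    # Logistic loss as a convex surrogate upper-bounding the zero-one risk,
    # plus KL(N(w, I) || N(w0, I)) = ||w - w0||^2 / 2 as the regularizer.
    margins = y * (X @ w)
    emp = np.mean(np.log1p(np.exp(-margins)))
    kl = 0.5 * np.sum((w - w0) ** 2)
    return emp + C * kl / n

def gradient(w, X, y, w0, n, C=1.0):
    margins = y * (X @ w)
    s = -y / (1.0 + np.exp(margins))  # derivative of the logistic loss
    return X.T @ s / len(y) + C * (w - w0) / n

# Invented toy data; labels come from an affine rule, so the origin-through
# separator cannot fit perfectly and the KL term matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) + np.array([1.0, 1.0])
y = np.where(X @ np.array([1.0, 1.0]) > 2.0, 1.0, -1.0)

w, w0 = np.zeros(2), np.zeros(2)
before = pac_bayes_objective(w, X, y, w0, n=len(y))
for _ in range(200):
    w -= 0.5 * gradient(w, X, y, w0, n=len(y))
after = pac_bayes_objective(w, X, y, w0, n=len(y))
```

Minimizing such an objective is exactly KL-regularized empirical risk minimization, which is the correspondence this paper establishes.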

Learning with Randomized Majority Votes [chapter]

Alexandre Lacasse, François Laviolette, Mario Marchand, Francis Turgeon-Boutin
2010 Lecture Notes in Computer Science  
The learning algorithms minimize a risk bound which is convex in the weights. ...  We propose algorithms for producing weighted majority votes that learn by probing the empirical risk of a randomized (uniformly weighted) majority vote, instead of probing the zero-one loss at some margin ...
doi:10.1007/978-3-642-15883-4_11

On Margins and Generalisation for Voting Classifiers [article]

Felix Biggs, Valentina Zantedeschi, Benjamin Guedj
2022 arXiv   pre-print
We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. ...  Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. [2021] for training voting classifiers; in contrast to that work, our bounds apply to non-randomised votes via ...
arXiv:2206.04607v1

A PAC-Bayes Sample-compression Approach to Kernel Methods

Pascal Germain, Alexandre Lacoste, François Laviolette, Mario Marchand, Sara Shanian
2011 International Conference on Machine Learning  
We provide novel risk bounds for these majority votes and learning algorithms that minimize these bounds. ...  of a more general class of data-dependent classifiers known as majority votes of sample-compressed classifiers. ...  Inspired by the work of Germain et al. (2009) on general loss bounds for stochastic classifiers, we propose two different PAC-Bayes risk bounds for majority votes of sample-compressed classifiers which ...
dblp:conf/icml/GermainLLMS11

A PAC-Bayes Risk Bound for General Loss Functions

Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand
2006 Neural Information Processing Systems  
We provide a PAC-Bayesian bound for the expected loss of convex combinations of classifiers under a wide class of loss functions (which includes the exponential loss and the logistic loss). ...  Our numerical experiments with AdaBoost indicate that the proposed upper bound, computed on the training set, behaves very similarly to the true loss estimated on the testing set. ...
dblp:conf/nips/GermainLLM06

PAC-Bayesian Generalization Bound on Confusion Matrix for Multi-Class Classification [article]

Emilie Morvant
2013 arXiv   pre-print
In this work, we propose a PAC-Bayes bound for the generalization risk of the Gibbs classifier in the multi-class classification framework. ...  To the best of our knowledge, these are the first PAC-Bayes bounds based on confusion matrices. ...  For instance, in the binary PAC-Bayes setting, the algorithm MinCq proposed by Laviolette et al. (2011) minimizes a bound depending on the first two moments of the margin of the Q-weighted majority vote ...
arXiv:1202.6228v6

PAC-Bayes Control: Learning Policies that Provably Generalize to Novel Environments [article]

Anirudha Majumdar, Alec Farid, Anoopkumar Sonar
2020 arXiv   pre-print
In particular, we utilize the Probably Approximately Correct (PAC)-Bayes framework, which allows us to obtain upper bounds that hold with high probability on the expected cost of (stochastic) control policies  ...  The key technical idea behind our approach is to leverage tools from generalization theory in machine learning by exploiting a precise analogy (which we present in the form of a reduction) between generalization  ...  Techniques for converting stochastic hypotheses into deterministic hypotheses have been developed within the PAC-Bayes framework (e.g., using majority voting in the classification setting [45, 42] );  ... 
arXiv:1806.04225v5

PAC-Bayesian learning of linear classifiers

Pascal Germain, Alexandre Lacasse, François Laviolette, Mario Marchand
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
We present a general PAC-Bayes theorem from which all known PAC-Bayes risk bounds are obtained as particular cases. ...  We also propose different learning algorithms for finding linear classifiers that minimize these bounds. These learning algorithms are generally competitive with both AdaBoost and the SVM. ...
doi:10.1145/1553374.1553419 dblp:conf/icml/GermainLLM09

PAC-Bayes Analysis Beyond the Usual Bounds [article]

Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvari, John Shawe-Taylor
2020 arXiv   pre-print
Specifically, we present a basic PAC-Bayes inequality for stochastic kernels, from which one may derive extensions of various known PAC-Bayes bounds as well as novel bounds. ...  We focus on a stochastic learning model where the learner observes a finite set of training examples and the output of the learning process is a data-dependent distribution over a space of hypotheses. ...
arXiv:2006.13057v3

Dichotomize and Generalize: PAC-Bayesian Binary Activated Deep Neural Networks [article]

Gaël Letarte and Pascal Germain and Benjamin Guedj and François Laviolette
2020 arXiv   pre-print
Our contributions are twofold: (i) we develop an end-to-end framework to train a binary activated deep neural network, and (ii) we provide non-vacuous PAC-Bayesian generalization bounds for binary activated deep neural networks. ...  Our results are obtained by minimizing the expected loss of an architecture-dependent aggregation of binary activated deep neural networks. ...
arXiv:1905.10259v5

Tighter risk certificates for neural networks [article]

María Pérez-Ortiz and Omar Rivasplata and John Shawe-Taylor and Csaba Szepesvári
2021 arXiv   pre-print
We also re-implement a previously used training objective based on a classical PAC-Bayes bound, to compare the properties of the predictors learned using the different training objectives. ...  These two training objectives are derived from tight PAC-Bayes bounds. ...  We rigorously study and illustrate 'PAC-Bayes with Backprop' (PBB), a generic strategy to derive (probabilistic) neural network training methods from PAC-Bayes bounds. ...
arXiv:2007.12911v3
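Risk certificates of this kind are typically computed by inverting the binary KL divergence: given the empirical risk and a budget on kl(empirical || true), one reports the largest true risk consistent with both. A minimal sketch of that inversion via bisection (routine numerics, not the paper's code; the example numbers are invented):

```python
import math

def binary_kl(q, p):
    # kl(q || p) between Bernoulli(q) and Bernoulli(p), clipped for stability.
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q_hat, budget, tol=1e-9):
    # Largest p >= q_hat with binary_kl(q_hat, p) <= budget; the map
    # p -> kl(q_hat || p) is increasing on [q_hat, 1), so bisection works.
    lo, hi = q_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_kl(q_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical numbers: empirical Gibbs risk 0.05, KL(Q||P) = 10,
# n = 10000 training examples, confidence 1 - delta = 0.95.
n, delta, kl_term = 10000, 0.05, 10.0
budget = (kl_term + math.log(2 * math.sqrt(n) / delta)) / n
certificate = kl_inverse(0.05, budget)
```

The returned `certificate` is the risk bound one would report; tightening the KL budget (smaller KL term, larger n) pulls it toward the empirical risk.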

Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory [article]

Ron Amit, Ron Meir
2019 arXiv   pre-print
We present a framework for meta-learning that is based on generalization error bounds, allowing us to extend various PAC-Bayes bounds to meta-learning. ...  We develop a gradient-based algorithm which minimizes an objective function derived from the bounds and demonstrate its effectiveness numerically with deep neural networks. ...
arXiv:1711.01244v8

PAC-Bayes with Minimax for Confidence-Rated Transduction [article]

Akshay Balsubramani, Yoav Freund
2015 arXiv   pre-print
By using PAC-Bayes analysis on these rules, we obtain data-dependent performance guarantees without distributional assumptions on the data.  ...  Our analysis techniques are readily extended to a setting in which the predictor is allowed to abstain.  ...  with the majority vote (ours can be stochastic).  ... 
arXiv:1501.03838v1

Domain adaptation of weighted majority votes via perturbed variation-based self-labeling

Emilie Morvant
2015 Pattern Recognition Letters  
In the non-DA supervised setting, a theoretical bound, the C-bound, involves this disagreement and leads to a majority vote learning algorithm: MinCq. ...  This arises when one wants to learn, from a source distribution, a good weighted majority vote (over a set of classifiers) on a different target distribution. ...  In the supervised setting, Laviolette et al. (2011) then proposed to minimize the empirical counterpart of the C-bound for learning a good majority vote over H, justified by an elegant PAC-Bayesian ...
doi:10.1016/j.patrec.2014.08.013
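For reference, the C-bound mentioned in this entry (introduced by Lacasse et al., 2006, and minimized by MinCq) controls the risk of the Q-weighted majority vote B_Q through the first two moments of its margin M_Q(x, y) = E_{h~Q}[y h(x)]:

```latex
% C-bound: if \mathbb{E}_{(x,y) \sim D}[M_Q(x,y)] > 0, then
R(B_Q) \;\le\; 1 -
  \frac{\big(\mathbb{E}_{(x,y) \sim D}[M_Q(x,y)]\big)^2}
       {\mathbb{E}_{(x,y) \sim D}\big[M_Q(x,y)^2\big]}
```

The second moment captures the disagreement between voters, which is why the domain-adaptation work above tracks voter disagreement across distributions.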
Showing results 1-15 of 173 results