A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is `application/pdf`.


### Learning convex bodies under uniform distribution

1992 · *Information Processing Letters*

*Learning* *convex* bodies *under* *uniform* *distribution*. Information Processing Letters 43 (1992) 35–39. ... We prove that the class of *convex* bodies contained in a fixed (prescribed) bounded region R ⊂ ℝ^d is PAC-learnable, if the positive examples are drawn according to the *uniform* *distribution* D_f on the target ... class of *convex* bodies contained in a fixed bounded region R ⊂ ℝ^d *under* the *uniform* *distribution* on the positive examples. The sample size 3.1 [1]. ...

### Information Theoretic Guarantees for Empirical Risk Minimization with Applications to Model Selection and Large-Scale Optimization

2018 · *International Conference on Machine Learning*

dblp:conf/icml/Alabdulmohsin18
fatcat:bvl5jvusajdnhg43ku4ghqq4cq

We prove that *under* the Axiom of Choice, the existence of an ERM *learning* rule with vanishing mutual information is equivalent to the assertion that the loss class has a finite VC dimension, thus bridging ... information theory with statistical *learning* theory. ... However, the true goal behind stochastic *convex* optimization in the machine *learning* *setting* is not to compute the empirical risk minimizer ĥ per se but to estimate h . ...
### Learnability, Stability and Uniform Convergence

2010 · *Journal of Machine Learning Research*

dblp:journals/jmlr/Shalev-ShwartzSSS10
fatcat:uxm6usdfkzafri6456myrfq4oy

We show that in this *setting*, there are non-trivial *learning* problems where *uniform* convergence does not hold, empirical risk minimization fails, and yet they are learnable using alternative mechanisms ... In this paper, we consider the General *Learning* *Setting* (introduced by Vapnik), which includes most statistical *learning* problems as special cases. ... To justify the necessity of *uniform* convergence even in the General *Learning* *Setting*, Vapnik attempted to show that in this *setting*, learnability with the ERM *learning* rule is equivalent to *uniform* convergence ...
### Learning Geometric Concepts via Gaussian Surface Area

2008 · *49th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2008)*

doi:10.1109/focs.2008.64
dblp:conf/focs/KlivansOS08
fatcat:dns7sl4qhbhbbcyepzw7lve5mu

We study the learnability of *sets* in ℝ^n *under* the Gaussian *distribution*, taking Gaussian surface area as the "complexity measure" of the *sets* being *learned*. ... These results together show that Gaussian surface area essentially characterizes the computational complexity of *learning* *under* the Gaussian *distribution*. ... I(f)/ε, and hence can be *learned* *under* the *uniform* *distribution* in time n^{I(f)/ε}. ...
### The Perceptron Algorithm is Fast for Nonmalicious Distributions

1990 · *Neural Computation*

doi:10.1162/neco.1990.2.2.248
fatcat:a56lp3tgfjefhp66fk6zfxa6bi

In an appendix we show that, for *uniform* *distributions*, some classes of infinite V-C dimension including *convex* *sets* and a class of nested differences of *convex* *sets* are learnable. ... *Under* this definition, the Perceptron algorithm is shown to be a *distribution*-independent *learning* algorithm. ... For example, for the *uniform* *distribution*, the class of *convex* *sets* and a class of nested differences of *convex* *sets* (both of which trivially have infinite V-C dimension) are shown to be learnable in ...
### Stability and Generalization of Decentralized Stochastic Gradient Descent

2021 · *Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence and the Twenty-Eighth Innovative Applications of Artificial Intelligence Conference*

doi:10.1609/aaai.v35i11.17173
fatcat:4aqdj6l7r5hw5jwongxjzimud4

We verify our theoretical findings by using a variety of decentralized *settings* and benchmark machine *learning* models. ... Leveraging this formulation together with (non)*convex* optimization theory, we establish the first stability and generalization guarantees for decentralized stochastic gradient descent. ... The *uniform* stability of empirical risk minimization (ERM) *under* strongly *convex* objectives is considered by Bousquet and Elisseeff (2002). ...
### Robust Optimization for Non-Convex Objectives
[article]

2017 · *arXiv* pre-print

arXiv:1707.01047v1
fatcat:fgmmjek7uzdarpdl22fbl77hna

We show that de-randomizing this solution is NP-hard in general, but can be done for a broad class of statistical *learning* tasks. ... We develop a reduction from robust improper optimization to Bayesian optimization: given an oracle that returns α-approximate solutions for *distributions* over objectives, we compute a *distribution* over ... Indeed, let D be any *distribution* over F, and suppose f_i is any function with maximum probability *under* D. Then the *set* S = {a_i1, . . . , a_ik} maximizes expected value *under* D. ...
### The Hedge Algorithm on a Continuum

2015 · *International Conference on Machine Learning*

dblp:conf/icml/KricheneBTB15
fatcat:aeqcnkwj5nahlppjm3zauezlcm

Finally, we propose a generalization to the dual averaging method on the *set* of Lebesgue-continuous *distributions* over S. ... We consider an online optimization problem on a compact subset S ⊂ ℝ^n (not necessarily *convex*), in which a decision maker chooses, at each iteration t, a probability *distribution* x^(t) over S, and seeks ... The reference measure is the Lebesgue measure λ, and the initial *distribution* x^(0) is the Lebesgue-uniform *distribution* over S, i.e. x^(0)(s) = 1/λ(S). Sublinear Regret on *Convex* *Sets*, Lemma 3. ...
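This entry generalizes the classic discrete-experts Hedge update to continuous distributions. As a point of reference, a minimal sketch of the discrete update (exponential reweighting from a uniform start) might look as follows; the function name, step size `eta`, and the toy loss sequence are illustrative, not taken from the paper:

```python
import math

def hedge(losses, eta=0.5):
    """Discrete Hedge: maintain a distribution over n experts and
    reweight each round by exp(-eta * loss). Returns the sequence of
    distributions played. Illustrative sketch only; the paper above
    extends this update to Lebesgue-continuous distributions over a
    compact set S."""
    n = len(losses[0])
    w = [1.0] * n                                 # uniform initial weights
    played = []
    for round_losses in losses:
        total = sum(w)
        played.append([wi / total for wi in w])   # x^(t): normalized weights
        w = [wi * math.exp(-eta * l) for wi, l in zip(w, round_losses)]
    return played

# Expert 0 consistently incurs lower loss, so mass shifts toward it.
dists = hedge([[0.0, 1.0]] * 10)
print(dists[0])    # first round: uniform [0.5, 0.5]
print(dists[-1])   # later rounds: most of the mass on expert 0
```

With the continuum generalization, the normalized-weights step becomes a density update with respect to the reference (Lebesgue) measure rather than a finite sum.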
### Stability and Risk Bounds of Iterative Hard Thresholding

2021 · *International Conference on Artificial Intelligence and Statistics*

dblp:conf/aistats/YuanL21
fatcat:uvunsleqizakdmrrg5uamhxc4q

excess risk; and 2) a fast rate of order Õ(n^{-1} k(log^3(n) + log(p))) can be derived for a strongly *convex* risk function *under* certain strong-signal conditions. ... From the perspective of statistical *learning* theory, another fundamental question is how well the I-HT estimation would perform on unseen samples. ... *under* Grant No. 61876090 and No. 61936005. ...
### Testing convexity of figures under the uniform distribution

2018 · *Random Structures & Algorithms (Print)*

doi:10.1002/rsa.20797
fatcat:y3rvb2yev5bbfi23jm3wj3ok5q

Our testing algorithm runs in time O(ε^{-4/3}) and thus beats the Ω(ε^{-3/2}) sample lower bound for *learning* *convex* figures *under* the *uniform* *distribution* from [26]. ... It shows that, with *uniform* samples, we can check whether a *set* is approximately *convex* much faster than we can find an approximate representation of a *convex* *set*. ... *uniform* samples can be done faster than *learning* *convex* figures *under* the *uniform* *distribution*. ...
### PAC-Bayesian Collective Stability

2014 · *International Conference on Artificial Intelligence and Statistics*

dblp:conf/aistats/LondonHTG14
fatcat:ljo74muzfzckhoi5ib2shw2tfa

We then derive a generalization bound for a class of structured predictors with variably *convex* inference, which suggests a novel *learning* objective that optimizes collective stability. ... Recent results have shown that the generalization error of structured predictors decreases with both the number of examples and the size of each example, provided the data *distribution* has weak dependence ... Government is authorized to reproduce and *distribute* reprints for governmental purposes notwithstanding any copyright annotation thereon. ...
### Making risk minimization tolerant to label noise

2015 · *Neurocomputing*

doi:10.1016/j.neucom.2014.09.081
fatcat:hycmsefwz5galkc6bdvkc7ewvi

... assume the classes to be separable *under* the noise-free data *distribution*. ... We prove a sufficient condition on a loss function for the risk minimization *under* that loss to be tolerant to *uniform* label noise. ... *Under* a balanced training *set* and symmetric classes with *uniform* densities, SVM performs moderately well *under* noise. ...
### Stochastic Convex Optimization

2009 · *Annual Conference Computational Learning Theory*

dblp:conf/colt/Shalev-ShwartzSSS09
fatcat:5o2nz6pspjhcdch4uw6kc7wtpi

Our results demonstrate that the celebrated theorem of Alon et al. on the equivalence of learnability and *uniform* convergence does not extend to Vapnik's General *Setting* of *Learning*, that in the General ... Inspired by recent regret bounds for online *convex* optimization, we study stochastic *convex* optimization, and uncover a surprisingly different situation in the more general *setting*: although the stochastic ... And as we mentioned, learnability (*under* the standard supervised *learning* model) is in fact equivalent to a *uniform* convergence property. ...
### Learning with Noisy Labels

2013 · *Neural Information Processing Systems*

dblp:conf/nips/NatarajanDRT13
fatcat:mv52dz3jsfbotje6ricpjutwq4

Second, by leveraging a reduction of risk minimization *under* noisy labels to classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain ... *Learning* with *convex* losses has been addressed only *under* limiting assumptions like separability or *uniform* noise rates [Manwani and Sastry, 2013]. ... To the best of our knowledge, we are the first to provide guarantees for risk minimization *under* random label noise in the general *setting* of *convex* surrogates, without any assumptions on the true *distribution* ...
### One Size Fits All: Can We Train One Denoiser for All Noise Levels?
[article]

2020 · *arXiv* pre-print

arXiv:2005.09627v3
fatcat:4l3woql4uvffddwssg4q4optli

For estimators with non-*convex* admissible *sets* such as deep neural networks, our dual formulation converges to a solution of the *convex* relaxation. ... We derive a dual ascent algorithm to determine the optimal sampling *distribution*, of which the convergence is guaranteed as long as the *set* of admissible estimators is closed and *convex*. ... Acknowledgement: The work is supported, in part, by the US National Science Foundation *under* grants CCF-1763896 and CCF-1718007. ...

*Showing results 1 — 15 out of 63,196 results*