Page 6635 of Mathematical Reviews, Issue 2001I
[page]
2001
Mathematical Reviews
On the power of learning robustly. ...
The additional learning power of probabilistic learning machines can be compensated when we allow the machines to pose ‘questions’ to an oracle. ...
On the Hardness of Robust Classification
[article]
2019
arXiv
pre-print
Finally, we provide a simple proof of the computational hardness of robust learning on the boolean hypercube. ...
Unlike previous results of this nature, our result does not rely on another computational model (e.g. the statistical query model) nor on any hardness assumption other than the existence of a hard learning ...
On the other hand, a more powerful learning algorithm that has access to membership queries can exactly learn monotone conjunctions and as a result can also robustly learn with respect to exact in the ...
arXiv:1909.05822v1
fatcat:j5g5pwh7mfdvnlsjtpexjpjvga
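The "exact in the ball" robustness notion cut off in the snippet above admits a standard formulation; the following restatement may help (notation is ours, not copied from the paper):

```latex
% Exact-in-the-ball robust risk of hypothesis h against target concept c,
% with perturbation ball B_\rho(x) of radius \rho around each input x:
\[
  \mathrm{R}_{\rho}(h, c) \;=\; \Pr_{x \sim \mathcal{D}}
  \bigl[\, \exists\, z \in B_{\rho}(x) : h(z) \neq c(z) \,\bigr].
\]
% A learner robustly learns a class exactly in the ball if it can drive
% R_rho(h, c) below any epsilon > 0 for every target c in the class.
```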
Robust Learning Aided by Context
2000
Journal of computer and system sciences (Print)
So, if one considers proper subsets S ⊂ REC, it may happen that one loses learning power, just because the space of possible contexts is reduced. ...
Of course, on intuitive grounds, it is to be expected that the learning power can be further increased if the context given to the learner is not arbitrary, but is carefully selected. ...
doi:10.1006/jcss.1999.1637
fatcat:wdbztuag5jfdjnbg6yikev6hoq
A Tour of Robust Learning
[chapter]
2003
Computability and Models
The present work surveys research on robust learning and focuses on the recently introduced variants of uniformly robust and hyperrobust learning. ...
Bārzdiņš conjectured that only recursively enumerable classes of functions can be learned robustly. ...
Further thanks go to the co-authors (John Case, Wolfgang Merkle, Matthias Ott, Arun Sharma, Carl Smith, Rolf Wiehagen and Thomas Zeugmann) of joint research reported in this survey. ...
doi:10.1007/978-1-4615-0755-0_9
fatcat:237kenrjobfhhkyh2hnosrccga
Robust learning aided by context
1998
Proceedings of the eleventh annual conference on Computational learning theory - COLT '98
Empirical studies of multitask learning provide some evidence that the performance of a learning system on its intended targets improves by presenting to the learning system related tasks, also called ...
However, their proofs rely heavily on self-referential coding tricks, that is, they directly code the solution of the learning problem into the context. ...
So, if one considers proper subsets S ⊂ REC, it may happen that one loses learning power, just because the space of possible contexts is reduced. ...
doi:10.1145/279943.279952
dblp:conf/colt/CaseJOSS98
fatcat:nhok3hto5feuxbqzbo3hunfqay
Page 5206 of Mathematical Reviews, Issue 2002G
[page]
2002
Mathematical Reviews
They exhibit some self-referential classes that may be learned robustly, contrary to previous conjecture, and remark upon the topological structure of robustly learnable classes. ...
In many cases the language generating power does not change, as in the case of OL and TOL systems. ...
Risks from Learned Optimization in Advanced Machine Learning Systems
[article]
2021
arXiv
pre-print
We believe that the possibility of mesa-optimization raises two important questions for the safety and transparency of advanced machine learning systems. ...
We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer - a situation we refer to as mesa-optimization, a neologism we introduce in ...
Then, let x be the optimization power applied by the learned algorithm in each environment instance and f (x) the total amount of optimization power the base optimizer must put in to get a learned algorithm ...
arXiv:1906.01820v3
fatcat:zxjkbx3zlfh3ldp3jwe6nzr5pq
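The snippet's trade-off between x and f(x) is cut off mid-definition; reconstructing from the quoted setup (our reading; N, the number of environment instances, is our symbol):

```latex
% Total optimization power spent across N environment instances when the
% base optimizer invests f(x) to obtain a learned algorithm that itself
% applies x per instance (a reconstruction of the truncated snippet):
\[
  P_{\mathrm{total}}(x) \;=\; f(x) \;+\; N \cdot x .
\]
% For large N it can be cheaper to grow x, i.e. to push optimization work
% into the learned algorithm -- the incentive toward mesa-optimization.
```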
Robustly-reliable learners under poisoning attacks
[article]
2022
arXiv
pre-print
... make a user no longer trust the results of a learning system. ...
We provide robustly-reliable predictions, in which the predicted label is guaranteed to be correct so long as the adversary has not exceeded a given corruption budget, even in the presence of instance ...
This material is based on work supported by the National Science Foundation under grants CCF-1910321, IIS-1901403, SES-1919453, and CCF-1815011; an AWS Machine Learning Research Award; an Amazon Research ...
arXiv:2203.04160v1
fatcat:frna56o5g5cyjl3pj4nrnqkjsu
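The guarantee described in the snippet can be phrased as follows (our paraphrase; the symbols are ours, not the paper's):

```latex
% A robustly-reliable learner outputs, for each test point x, a pair
% (\hat{y}(x), \eta(x)) of prediction and certified corruption budget:
\[
  \text{if the adversary poisoned at most } \eta(x)
  \text{ training points, then } \hat{y}(x) = c^{*}(x),
\]
% where c^* is the true target.  A larger \eta(x) means the prediction
% tolerates a larger poisoning budget.
```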
RoNGBa: A Robustly Optimized Natural Gradient Boosting Training Approach with Leaf Number Clipping
[article]
2019
arXiv
pre-print
Experiments show that our approach significantly beats the state-of-the-art performance on various kinds of datasets from the UCI Machine Learning Repository while still achieving up to a 4.85x speed-up compared ...
We present a replication study of NGBoost(Duan et al., 2019) training that carefully examines the impacts of key hyper-parameters under the circumstance of best-first decision tree learning. ...
Acknowledgments We want to thank Michal Moshkovitz and Joseph Geumlek for the early discussions of the project. ...
arXiv:1912.02338v1
fatcat:esugpctwlrd33kkqel6jucl73y
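As a rough illustration of the setup this entry describes: NGBoost training with a best-first base tree whose leaf count is clipped. This is a minimal sketch assuming the open-source ngboost package (Duan et al., 2019); the hyper-parameter values (31 leaves, 500 estimators) are our placeholders, not the RoNGBa configuration.

```python
# Minimal sketch, assuming the `ngboost` package; values are illustrative.
from ngboost import NGBRegressor
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Best-first decision tree learning: max_leaf_nodes grows leaves in order
# of impurity reduction and stops at the clip value -- "leaf number clipping".
base_tree = DecisionTreeRegressor(criterion="friedman_mse", max_leaf_nodes=31)

X, y = fetch_california_housing(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ngb = NGBRegressor(Base=base_tree, n_estimators=500, learning_rate=0.04)
ngb.fit(X_train, y_train)
print("first predictions:", ngb.predict(X_test)[:5])
```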
Avoiding coding tricks by hyperrobust learning
2002
Theoretical Computer Science
Hyperrobust BC-learning as well as the hyperrobust version of Ex-learning by teams are more powerful than hyperrobust Ex-learning. ...
The present work introduces and justifies the notion of hyperrobust learning where one fixed learner has to learn all functions in a given class plus their images under primitive recursive operators. ...
Acknowledgements The authors would like to thank Kejia Joyce Ho and the anonymous referees for proofreading and for their commentary. ...
doi:10.1016/s0304-3975(01)00086-x
fatcat:3rfrrrt5pfd6nc4zy6kzivk4ae
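Formally, the notion quoted in this entry can be stated roughly as follows (our restatement of the snippet's definition):

```latex
% A class S \subseteq REC is hyperrobustly Ex-learnable if one fixed
% learner M Ex-learns the closure of S under all primitive recursive
% operators \Theta (the identity operator gives S itself):
\[
  M \text{ Ex-learns }
  \bigcup_{\Theta\ \text{prim.\ rec.}} \Theta(S)
  \;=\; \{\, \Theta(f) : f \in S,\ \Theta \text{ primitive recursive} \,\}.
\]
```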
Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
[article]
2022
arXiv
pre-print
Barely robust learning algorithms learn predictors that are adversarially robust only on a small fraction β ≪ 1 of the data distribution. ...
We present an oracle-efficient algorithm for boosting the adversarial robustness of barely robust learners. ...
This work was supported in part by DARPA under cooperative agreement HR00112020003. 1 This work was supported in part by the National Science Foundation under grant CCF-1815011. ...
arXiv:2202.05920v1
fatcat:5ryxmj57xbh5xlj4me63dnfx7y
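The "barely robust" condition in the snippet has a direct formalization (our notation; U(x) denotes the adversary's perturbation set around x):

```latex
% A predictor h is \beta-robust with respect to perturbation sets U(x) if
\[
  \Pr_{(x, y) \sim \mathcal{D}}
  \bigl[\, \forall\, z \in U(x) : h(z) = y \,\bigr] \;\geq\; \beta ,
\]
% and a learner is "barely robust" when it only guarantees this for some
% small \beta \ll 1; boosting then amplifies \beta toward 1 - \epsilon.
```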
Page 7405 of Mathematical Reviews, Issue 2001J
[page]
2001
Mathematical Reviews
The problem of learning pattern languages from positive data has been well studied, usually with emphasis on the worst-case complexity of learning. ...
Fulk has shown that for the Ex- and Bc-anomaly hierarchies, such results, which rely on self-referential coding tricks, do not hold robustly. ...
Hierarchical Compositional Representations of Object Structure
[chapter]
2012
Lecture Notes in Computer Science
The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another. ...
... for learning the representation from a set of input images with as little supervision as possible; and an effective inference algorithm that robustly matches the object representation against the image ...
doi:10.1007/978-3-642-34166-3_3
fatcat:myz4tninbfbnthdg56bacajowm
On the Uniform Learnability of Approximations to Non-recursive Functions
[chapter]
1999
Lecture Notes in Computer Science
Since the definition of the class B is quite natural and does not contain any self-referential coding, B serves as an example that the notion of robustness for learning is much more restrictive than intended ...
These investigations are carried on by showing that B is neither in NUM nor robustly EX-learnable. ...
Clearly, what one is really interested in are powerful learning algorithms that can learn not only one function but all functions from a given class of functions. ...
doi:10.1007/3-540-46769-6_23
fatcat:w72vp7j77fgy5m353gqp45ktwy
VC Classes are Adversarially Robustly Learnable, but Only Improperly
[article]
2019
arXiv
pre-print
We study the question of learning an adversarially robust predictor. We show that any hypothesis class H with finite VC dimension is robustly PAC learnable with an improper learning rule. ...
The requirement of being improper is necessary as we exhibit examples of hypothesis classes H with finite VC dimension that are not robustly PAC learnable with any proper learning rule. ...
A natural question to ask, based on the definition of robust PAC learning, is what condition on H is necessary and sufficient for it to be robustly PAC learnable with respect to adversary ...
arXiv:1902.04217v2
fatcat:r2dmtirarnaoxhs4ljojlohcsm
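For reference, the robust risk underlying "robust PAC learning" in this entry is standardly written as follows (notation is ours):

```latex
% Robust risk of h under adversary U, over distribution D:
\[
  R_{U}(h; \mathcal{D}) \;=\; \Pr_{(x, y) \sim \mathcal{D}}
  \bigl[\, \exists\, z \in U(x) : h(z) \neq y \,\bigr],
\]
% and H is robustly PAC learnable if some (possibly improper) rule outputs
% h with R_U(h) <= min_{h' in H} R_U(h') + epsilon, with high probability.
```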