
On agnostic boosting and parity learning

Adam Tauman Kalai, Yishay Mansour, Elad Verbin
2008 Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing - STOC '08  
Our agnostic boosting framework is completely general and may be applied to other agnostic learning problems.  ...  Hence, it also sheds light on the actual difficulty of agnostic learning by showing that full agnostic boosting is indeed possible.  ...  We are very grateful to the anonymous referees for the accurate and pointed comments.  ... 
doi:10.1145/1374376.1374466 dblp:conf/stoc/KalaiMV08 fatcat:kcgl7xxjmfa3tdwae6abzr4kgu

Distribution-Specific Agnostic Boosting [article]

Vitaly Feldman
2009 arXiv   pre-print
When applied to the weak agnostic parity learning algorithm of Goldreich and Levin (1989), our algorithm yields a simple PAC learning algorithm for DNF and an agnostic learning algorithm for decision trees  ...  We consider the problem of boosting the accuracy of weak learning algorithms in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992).  ...  An agnostic learning algorithm for a parity function over the uniform distribution and using membership queries was given by Goldreich and Levin [13] (see also [29]).  ... 
arXiv:0909.2927v1 fatcat:4ckz5ryasngmzmu4bxsbgietam

On Agnostic Learning of Parities, Monomials, and Halfspaces

Vitaly Feldman, Parikshit Gopalan, Subhash Khot, Ashok Kumar Ponnuswami
2009 SIAM Journal on Computing (Print)  
Together with the parity learning algorithm of Blum et al. [BKW03], this gives the first nontrivial algorithm for agnostic learning of parities.  ...  We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem.  ...  We would also like to thank Shaili Jain and the anonymous referees of CCC '06 and FOCS '06 for numerous helpful remarks, one of which simplified the proof of Theorem 8.  ... 
doi:10.1137/070684914 fatcat:on6brq4wyfchrl55fsdbhtevuq

New Results for Learning Noisy Parities and Halfspaces

Vitaly Feldman, Parikshit Gopalan, Subhash Khot, Ashok Ponnuswami
2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06)  
We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise.  ...  Together with the parity learning algorithm of Blum et al. [BKW03], this gives the first nontrivial algorithm for learning parities with adversarial noise.  ...  Acknowledgments We would like to thank Avrim Blum and Leslie Valiant for useful comments and suggestions.  ... 
doi:10.1109/focs.2006.51 dblp:conf/focs/FeldmanGKP06 fatcat:wzfde2mcwrb6xfplkcgsh5knbe

Embedding Hard Learning Problems Into Gaussian Space

Adam Klivans, Pravesh Kothari
2014 International Workshop on Approximation Algorithms for Combinatorial Optimization  
We reduce from the problem of learning sparse parities with noise with respect to the uniform distribution on the hypercube (sparse LPN), a notoriously hard problem in theoretical computer science, and  ...  the hardness of learning problems on two different domains and distributions.  ...  We begin with a general reduction from the problem of learning k-sparse parity with noise over the uniform distribution on {−1, 1}^n to the problem of learning any class C of functions on R^n agnostically on  ... 
doi:10.4230/lipics.approx-random.2014.793 dblp:conf/approx/KlivansK14 fatcat:uogqzy45fbfjjfueqtqmd3dwc4

Representation, Approximation and Learning of Submodular Functions Using Low-rank Decision Trees [article]

Vitaly Feldman and Pravesh Kothari and Jan Vondrak
2013 arXiv   pre-print
We also prove that our PAC and agnostic learning algorithms are essentially optimal via two lower bounds: (1) an information-theoretic lower bound of 2^Ω(1/ϵ^{2/3}) on the complexity of learning monotone  ...  submodular functions in any reasonable model; (2) a computational lower bound of n^Ω(1/ϵ^{2/3}) based on a reduction to learning of sparse parities with noise, widely believed to be intractable.  ...  There exists an algorithm AEFT that, given an integer d, θ > 0 and δ ∈ (0, 1], and access to value queries of any f : {0, 1}^n → [−1, 1], with probability at least 1 − δ, returns a function h represented by  ... 
arXiv:1304.0730v1 fatcat:utrdrx5nrrffzpidqpho5vjzme

Agnostic Learning of Disjunctions on Symmetric Distributions [article]

Vitaly Feldman, Pravesh Kothari
2015 arXiv   pre-print
This directly gives an agnostic learning algorithm for disjunctions on symmetric distributions that runs in time n^O(log(1/ϵ)).  ...  We consider the problem of approximating and learning disjunctions (or equivalently, conjunctions) on symmetric distributions over {0,1}^n.  ...  Kalai et al. [2008] and Feldman [2012] prove hardness of agnostic learning of majorities and conjunctions, respectively, based on correlation of concepts in these classes with parities.  ... 
arXiv:1405.6791v2 fatcat:fwygrqmhyremxpwedlpz5hqgmq

Distributed Learning, Communication Complexity and Privacy [article]

Maria-Florina Balcan, Avrim Blum, Shai Fine, Yishay Mansour
2012 arXiv   pre-print
recent work on agnostic learning from class-conditional queries can be used to achieve low communication in agnostic settings as well.  ...  We provide general upper and lower bounds on the amount of communication needed to learn well, showing that in addition to VC-dimension and covering number, quantities such as the teaching-dimension and  ...  Agnostic Learning Theorem 11.  ... 
arXiv:1204.3514v3 fatcat:icnluq6eojhytoby63xhdbgtxa

Efficient Learning with Arbitrary Covariate Shift [article]

Adam Kalai, Varun Kanade
2021 arXiv   pre-print
The present work gives a polynomial-time PQ-learning algorithm that uses an oracle to a "reliable" learner for C, where reliable learning (Kalai et al., 2012) is a model of learning with one-sided noise  ...  The algorithm of Goldwasser et al. (2020) requires an (agnostic) noise-tolerant learner for C.  ...  This suggests that PQ-learning is easier than agnostic learning, since there is no known noise-tolerant parity learning algorithm, and in fact multiple cryptographic systems rely on its hardness (see e.g  ... 
arXiv:2102.07802v1 fatcat:z4dyeulfkrgphbkkynzcxcypxu

Statistical-Query Lower Bounds via Functional Gradients [article]

Surbhi Goel, Aravind Gollakota, Adam Klivans
2020 arXiv   pre-print
This also yields a best-possible reduction between two commonly studied models of learning: agnostic learning and probabilistic concepts.  ...  (ICML 2020) on the SQ dimension of functions computed by two-layer neural networks. The crucial new ingredient is the use of a nonstandard convex functional during the boosting procedure.  ...  For boolean functions, the idea to use boosting to learn majorities of a base class appeared in Jackson [Jac97] , who boosted a weak parity learning algorithm in order to learn thresholds of parities  ... 
arXiv:2006.15812v2 fatcat:6phqg7gd2vh3blff2jxbk3nctu

Preventing Fairness Gerrymandering: Auditing and Learning for Subgroup Fairness [article]

Michael Kearns, Seth Neel, Aaron Roth, Zhiwei Steven Wu
2018 arXiv   pre-print
We prove that the computational problem of auditing subgroup fairness for both equality of false positive rates and statistical parity is equivalent to the problem of weak agnostic learning, which means  ...  We implement the simpler algorithm using linear regression as a heuristic oracle, and show that we can effectively both audit and learn fair classifiers on real datasets.  ...  Acknowledgements We thank Alekh Agarwal, Richard Berk, Miro Dudík, Akshay Krishnamurthy, John Langford, Greg Ridgeway and Greg Yang for helpful discussions and suggestions.  ... 
arXiv:1711.05144v5 fatcat:yk6liiubmvbzflrbzkbaw6kjji

A Complete Characterization of Statistical Query Learning with Applications to Evolvability [article]

Vitaly Feldman
2013 arXiv   pre-print
Unlike the previously known bounds on SQ learning, our characterization preserves the accuracy and the efficiency of learning.  ...  The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework.  ...  Acknowledgements I thank Nader Bshouty, Hans Simon and Les Valiant for discussions and valuable comments on this work.  ... 
arXiv:1002.3183v3 fatcat:k7zb4bpeurg5xofb3omlhw5yxy

A complete characterization of statistical query learning with applications to evolvability

Vitaly Feldman
2012 Journal of Computer and System Sciences (Print)  
The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework of Haussler (1992) [23] and Kearns, Schapire and Sellie  ...  Unlike the previously known bounds on SQ learning (Blum et al.), our characterization preserves the accuracy and the efficiency of learning.  ...  Acknowledgments I thank Nader Bshouty, Hans Simon and Les Valiant for discussions and valuable comments on this work.  ... 
doi:10.1016/j.jcss.2011.12.024 fatcat:tlxm4yx6h5fjdcuc4epz3fqjw4

Omnipredictors [article]

Parikshit Gopalan, Adam Tauman Kalai, Omer Reingold, Vatsal Sharan, Udi Wieder
2021 arXiv   pre-print
In addition, we show how multicalibration can be viewed as a solution concept for agnostic boosting, shedding new light on past results.  ...  Different loss functions imply different learning algorithms and, at times, very different predictors.  ...  Multicalibration and agnostic boosting One of our contributions in this work is to formalize and leverage the connections between multicalibration and the literature on agnostic boosting.  ... 
arXiv:2109.05389v1 fatcat:4chefbjh5vfl5db64xyht6ut5e

Online Fairness-Aware Learning with Imbalanced Data Streams [article]

Vasileios Iosifidis, Wenbin Zhang, Eirini Ntoutsi
2021 arXiv   pre-print
In such dynamic environments, the so-called data streams, fairness-aware learning cannot be considered a one-off requirement; rather, it should comprise a continual requirement over the stream.  ...  with a (relative) increase of 11.2%–14.2% in balanced accuracy, 22.6%–31.8% in gmean, 42.5%–49.6% in recall, 14.3%–25.7% in kappa and 89.4%–96.6% in statistical parity (fairness  ...  Massaging (MS) [24]: a chunk-based, model-agnostic stream fairness-aware learning approach which minimizes statistical parity on recent discriminatory outcomes.  ... 
arXiv:2108.06231v1 fatcat:svico76zvze7pjacfhwzwyftia