A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL. The file type is `application/pdf`.
###
A General Dimension for Approximately Learning Boolean Functions
[chapter]

2002
*Lecture Notes in Computer Science*

doi:10.1007/3-540-36169-3_13
fatcat:376kfkw7bnbz3nlqsdlqspve4a

##
###
Page 5274 of Mathematical Reviews Vol. , Issue 98H
[page]

1998
*Mathematical Reviews*

Summary: “In *general*, *approximating* classes of *functions* defined over high-*dimensional* input spaces by linear combinations of *a* fixed set of basis *functions* or ‘features’ is known to be hard. ... In this paper we give *a* *general* result relating the error of *approximation* of *a* *function* class to the covering number of its ‘convex core’. ...##
###
On the Fourier spectrum of monotone functions

1996
*Journal of the ACM*

doi:10.1145/234533.234564
fatcat:36owbohnrvawtnk5wy5a4pmjma

It is shown that this is tight in the sense that *for* any subexponential time algorithm there is *a* monotone *Boolean* *function* *for* which this algorithm cannot *approximate* with error better than ... The main result ... In *learning* theory, several polynomial-time algorithms *for* *learning* some classes of monotone *Boolean* *functions*, such as *Boolean* *functions* with O(log² n / log log n) relevant variables, are presented. ... We also thank Dan Boneh *for* suggesting the notion of influence norm, Yishay Mansour *for* suggesting the average sensitivity viewpoint, and Uriel Feige *for* pointing out to us some references on Hamiltonian ...##
###
Categorical invariance and structural complexity in human concept learning

2009
*Journal of Mathematical Psychology*

doi:10.1016/j.jmp.2009.04.009
fatcat:srcgq34sxredzdabrw7pmrce2a

Psychological Monographs: *General* and Applied, 75(13), 1–42] *Boolean* category types consisting of three binary *dimensions* and four positive examples; (2) it is, in *general*, *a* good quantitative predictor ... The categorical invariance model (CIM) characterizes the degree of structural complexity of *a* *Boolean* category as *a* *function* of its inherent degree of invariance and its cardinality or size. ... The truth tables in this article were *generated* by Truth Table Constructor 3.0, an Internet applet by Brian S. Borowski. ...##
###
On the Fourier spectrum of monotone functions

1995
*Proceedings of the twenty-seventh annual ACM symposium on Theory of computing - STOC '95*

doi:10.1145/225058.225125
dblp:conf/stoc/BshoutyT95
fatcat:xmpa3c6azzd4bf6zeyr6zcruki

It is shown that this is tight in the sense that *for* any subexponential time algorithm there is *a* monotone *Boolean* *function* *for* which this algorithm cannot *approximate* with error better than ... The main result ... In *learning* theory, several polynomial-time algorithms *for* *learning* some classes of monotone *Boolean* *functions*, such as *Boolean* *functions* with O(log² n / log log n) relevant variables, are presented. ... We also thank Dan Boneh *for* suggesting the notion of influence norm, Yishay Mansour *for* suggesting the average sensitivity viewpoint, and Uriel Feige *for* pointing out to us some references on Hamiltonian ...##
###
Learning Functions: When Is Deep Better Than Shallow
[article]

2016
*arXiv* pre-print

arXiv:1603.00988v4
fatcat:o5w4pcmyhfdkfmzbyq5ly37dgu

While the universal *approximation* property holds both *for* hierarchical and shallow networks, we prove that deep (hierarchical) networks can *approximate* the class of compositional *functions* with the same ... We then define *a* *general* class of scalable, shift-invariant algorithms to show *a* simple and natural set of requirements that justify deep convolutional networks. ... In fact our results seem to *generalize* properties already known *for* *Boolean* *functions*, which are of course *a* special case of *functions* of real variables. ...##
###
On Robust Concepts and Small Neural Nets

2017
*International Conference on Learning Representations*

dblp:conf/iclr/0001K17
fatcat:todir5xr2fdz5ecxm5ssw4vyja

We also give *a* polynomial time *learning* algorithm that outputs *a* small two-layer linear threshold circuit that *approximates* such *a* given *function*. ... We also show weaker *generalizations* of this to noise-stable polynomial threshold *functions* and noise-stable *boolean* *functions* in *general*. ... We show an efficient analog of the universal *approximation* theorem *for* neural networks in the case of noise-sensitive halfspaces of the *boolean* hypercube, and gave efficient *learning* algorithms *for* the same ...##
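The entry above turns on the notion of noise stability for Boolean functions. As a rough illustration of that notion only (not the paper's algorithm), noise stability at flip rate ε can be estimated by Monte Carlo sampling; `majority` and the parameter choices below are purely illustrative:

```python
import random

def majority(x):
    # Majority vote over ±1 bits: a simple, noise-stable halfspace.
    # Using an odd number of bits avoids ties.
    return 1 if sum(x) > 0 else -1

def noise_stability(f, n, eps, trials=20000, seed=0):
    """Estimate E[f(x) * f(y)], where x is uniform over {-1, +1}^n and
    y flips each bit of x independently with probability eps."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = [-b if rng.random() < eps else b for b in x]
        total += f(x) * f(y)
    return total / trials

# Majority stays highly correlated with itself under small noise.
print(noise_stability(majority, n=21, eps=0.05))
```

A noise-sensitive function (e.g., parity of all bits) would drive this estimate toward 0 at the same flip rate, which is the contrast the entry's "noise-stable" hypothesis exploits.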
###
Lower Bounds for Agnostic Learning via Approximate Rank

2010
*Computational Complexity*

*For* the concept class of majority *functions*, we obtain *a* lower bound of Ω(2ⁿ/n), which almost meets the trivial upper bound of 2ⁿ *for* any concept class. ... These lower bounds substantially strengthen and *generalize* the polynomial *approximation* lower bounds of Paturi (1992) and show that the regression-based agnostic *learning* algorithm of Kalai et al. (2005 ... Introduction: *Approximating* *Boolean* *functions* by linear combinations of small sets of features is *a* fundamental area of study in machine *learning*. ...

##
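Several entries above concern approximating Boolean functions by linear combinations of a small set of features; the parity (Fourier) basis is the standard example. A minimal exact sketch for small n, using majority on 5 bits purely as an illustration (not taken from any of the papers listed):

```python
from itertools import product

def majority(x):
    # Majority over ±1 bits (n odd, so no ties).
    return 1 if sum(x) > 0 else -1

def fourier_coefficient(f, n, S):
    """Exact Fourier coefficient f^(S) = E_x[f(x) * prod_{i in S} x_i],
    averaged over all 2^n points of {-1, +1}^n (feasible for small n)."""
    total = 0
    for x in product((-1, 1), repeat=n):
        chi = 1
        for i in S:
            chi *= x[i]
        total += f(x) * chi
    return total / 2 ** n

n = 5
# Degree-1 linear approximation: f(x) ≈ sum_i f^({i}) * x_i.
coeffs = [fourier_coefficient(majority, n, (i,)) for i in range(n)]
weight = sum(c * c for c in coeffs)  # Fourier weight captured at degree 1
print(coeffs, weight)
```

The captured weight measures how well the best linear combination of single-bit features approximates the function in squared error, which is exactly the quantity the lower bounds above constrain for richer classes.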
###
Classification by polynomial surfaces

1995
*Discrete Applied Mathematics*

doi:10.1016/0166-218x(94)00008-2
fatcat:pmwegwqi7nfnjeokot6orrrktm

We then use these results on the VC *dimension* to quantify the sample size required *for* valid *generalization* in Valiant's probably *approximately* correct framework (Valiant, 1984; Blumer et al., 1989). ... We then compute the Vapnik-Chervonenkis *dimension* of the class of *functions* realized by polynomial separating surfaces of at most *a* given degree, both *for* the case of *Boolean* inputs and real inputs. ... Acknowledgements: I thank Graham Brightwell *for* helpful discussions and comments and Michael Saks *for* helpful communications regarding the results of Noga Alon. ...##
###
Page 6059 of Mathematical Reviews Vol. , Issue 2002H
[page]

2002
*Mathematical Reviews*

One such question is whether *a* uniform training set is available *for* *learning* any *function* in *a* given *approximation* class. ... The *Boolean* *functions* we consider belong to *approximation* classes, i.e., *functions* that are *approximable* (in various norms) by *a* few Fourier basis *functions*, or irreducible characters of the domain abelian ...
###
The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning
[article]

2020
*arXiv* pre-print

arXiv:2010.11925v2
fatcat:qcxe4nk6ofh3zmvwpq6mqkfbkq

We consider the problem of distribution-free *learning* *for* *Boolean* *function* classes in the PAC and agnostic models. ... or *approximate* degree of any *function* class directly imply CSQ lower bounds *for* PAC or agnostic *learning*, respectively. ... SQ vs CSQ: While we do not prove lower bounds in the *general* SQ model, it seems unlikely that SQ *learning* is more powerful than CSQ *learning* *for* PAC *learning* *Boolean* *function* classes [BF02]. ...
###
A review of combinatorial problems arising in feedforward neural network design

1994
*Discrete Applied Mathematics*

doi:10.1016/0166-218x(92)00184-n
fatcat:vktvu5xdhfglpld3rrkw72ez3m

Valiant's *learning* from examples model, which formalizes the problem of *generalization*, is presented and open questions are mentioned. ... Exact and heuristic algorithms *for* designing networks with single or multiple layers are discussed and complexity results related to the *learning* problems are reviewed. ... The concept of LTB *functions* can be extended to *general* threshold *Boolean* *functions* (TB *functions*) by considering *general* (m − 1)-*dimensional* discriminant manifolds instead of hyperplanes. ...
###
Page 5478 of Mathematical Reviews Vol. , Issue 2003g
[page]

2003
*Mathematical Reviews*

... gained in *learning* the previous *dimensions*. ... We show that, given an alternative representation of *a* *Boolean* *function* f, say as *a* read-once branching program, one can find *a* decision tree T which *approximates* f to any desired amount of accuracy. ...
###
Agnostic Learning from Tolerant Natural Proofs

2017
*International Workshop on Approximation Algorithms for Combinatorial Optimization*

... *function* even with exp(−Ω(n)) advantage over random guessing) would yield *a* polynomial-time query agnostic *learning* algorithm *for* C with the *approximation* error O(opt). ... Our algorithm runs in randomized quasi-polynomial time, uses membership queries, and outputs *a* circuit *for* *a* given *boolean* *function* f : {0, 1}ⁿ → {0, 1} that agrees with f on all but at most (poly log ... Such *generators* exist *for* m ≤ n( + 1); *for* example, pick *a* random 0/1 matrix *A* of *dimension* n × and *a* random 0/1 vector v of *dimension* n. Let z = (*A*, v). ...

##
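The last lines of the snippet above describe sampling the generator's seed z = (A, v). A literal sketch of just that sampling step: the second dimension symbol is elided in the snippet, so `ell` below is a hypothetical stand-in, and the generator's output map itself is not reproduced here.

```python
import random

def sample_seed(n, ell, seed=None):
    """Sample z = (A, v): A is a random 0/1 matrix of dimension n x ell,
    v a random 0/1 vector of dimension n. `ell` is a hypothetical name
    for the dimension whose symbol is elided in the snippet."""
    rng = random.Random(seed)
    A = [[rng.randint(0, 1) for _ in range(ell)] for _ in range(n)]
    v = [rng.randint(0, 1) for _ in range(n)]
    return A, v

A, v = sample_seed(n=8, ell=3, seed=42)
print(len(A), len(A[0]), len(v))
```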
###
Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review

2017
*International Journal of Automation and Computing*

*A* class of deep convolutional networks represents an important special case of these conditions, though weight sharing is not the main reason *for* their exponential advantage. ... The paper reviews and extends an emerging body of theoretical results on deep *learning*, including the conditions under which it can be exponentially better than shallow *learning*. ... Shamir *for* useful emails that prompted us to clarify our results in the context of lower bounds. ...

*Showing results 1 — 15 out of 23,800 results*