A General Dimension for Approximately Learning Boolean Functions [chapter]

Johannes Köbler, Wolfgang Lindner
2002 Lecture Notes in Computer Science  
... Lemma 1. ... Proof. Suppose ... . Then there exists an answering scheme T for all ... rounds such that ... . Given the current version space V, we first compute rmax(V). If rmax(V) ...  ... is the smallest integer ≥ 0 such that the class is learnable with at most ... queries under P. If no such integer exists, then ... . Before we proceed, let us consider some examples. The protocol for equivalence queries with hypotheses from a concept class  ... 
doi:10.1007/3-540-36169-3_13 fatcat:376kfkw7bnbz3nlqsdlqspve4a
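The excerpt above describes a query-learning setting in which the learner maintains a version space and interacts with an answering scheme. As a loose illustration of that setting only (a toy sketch, not the paper's dimension or its answering-scheme machinery; names such as learn_with_eq are made up), here is a minimal version-space learner driven by equivalence queries, written in Python:

from itertools import product

def eq_oracle(target):
    """Equivalence-query oracle: returns (True, None) if the hypothesis
    agrees with the target on the whole domain, otherwise (False, counterexample)."""
    def query(hypothesis, domain):
        for x in domain:
            if hypothesis(x) != target(x):
                return False, x
        return True, None
    return query

def learn_with_eq(concept_class, domain, query):
    """Toy version-space learner: keep all concepts consistent with the
    counterexamples seen so far and propose a majority-vote hypothesis."""
    version_space = list(concept_class)
    while True:
        # Majority vote of the current version space as the hypothesis.
        def hypothesis(x, V=tuple(version_space)):
            votes = sum(1 for c in V if c(x))
            return votes * 2 >= len(V)
        correct, counterexample = query(hypothesis, domain)
        if correct:
            return hypothesis
        # The counterexample's true label is the opposite of the hypothesis.
        label = not hypothesis(counterexample)
        version_space = [c for c in version_space if c(counterexample) == label]

# Example: concepts are single-coordinate indicator functions on {0,1}^3.
domain = list(product([0, 1], repeat=3))
concept_class = [lambda x, i=i: x[i] == 1 for i in range(3)]
target = concept_class[2]
h = learn_with_eq(concept_class, domain, eq_oracle(target))
print([h(x) == target(x) for x in domain])  # all True

Each counterexample eliminates at least half of the version space under the majority-vote hypothesis, which is the usual halving-style argument behind equivalence-query bounds.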

Page 5274 of Mathematical Reviews Vol. , Issue 98H [page]

1998 Mathematical Reviews  
Summary: “In general, approximating classes of functions defined over high-dimensional input spaces by linear combinations of a fixed set of basis functions or ‘features’ is known to be hard.  ...  In this paper we give a general result relating the error of approximation of a function class to the covering number of its ‘convex core’.  ... 

On the Fourier spectrum of monotone functions

Nader H. Bshouty, Christino Tamon
1996 Journal of the ACM  
It is shown that this is tight in the sense that for any subexponential-time algorithm there is a monotone Boolean function which this algorithm cannot approximate with error better than ... . The main result  ...  In learning theory, several polynomial-time algorithms for learning some classes of monotone Boolean functions, such as Boolean functions with O(log^2 n / log log n) relevant variables, are presented.  ...  We also thank Dan Boneh, for suggesting the notion of influence norm, Yishay Mansour, for suggesting the average sensitivity viewpoint, and Uriel Feige, for pointing out to us some references on Hamiltonian  ... 
doi:10.1145/234533.234564 fatcat:36owbohnrvawtnk5wy5a4pmjma
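The Bshouty–Tamon result concerns approximating monotone Boolean functions through their Fourier spectrum. As a quick reminder of what the Fourier coefficients of a Boolean function are (a generic brute-force computation under the ±1 convention, not the paper's learning algorithm), the following computes the spectrum of 3-bit majority:

from itertools import product, combinations

def fourier_coefficients(f, n):
    """Fourier coefficients hat{f}(S) = E_x[ f(x) * prod_{i in S} x_i ]
    for a Boolean function f : {-1,+1}^n -> {-1,+1}."""
    cube = list(product([-1, 1], repeat=n))
    coeffs = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            total = 0
            for x in cube:
                chi = 1
                for i in S:
                    chi *= x[i]
                total += f(x) * chi
            coeffs[S] = total / len(cube)
    return coeffs

def majority3(x):
    # 3-bit majority under the {-1,+1} encoding.
    return 1 if sum(x) > 0 else -1

for S, c in fourier_coefficients(majority3, 3).items():
    if abs(c) > 1e-12:
        print(S, c)

For 3-bit majority the only non-zero coefficients are 1/2 on each singleton and -1/2 on the full set, so most of the spectral weight already sits on degree-1 terms; this kind of concentration on low-degree characters is what low-degree approximation of monotone functions exploits.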

Categorical invariance and structural complexity in human concept learning

Ronaldo Vigo
2009 Journal of Mathematical Psychology  
Psychological Monographs: General and Applied, 75(13), 1-42] Boolean category types consisting of three binary dimensions and four positive examples; (2) it is, in general, a good quantitative predictor  ...  The categorical invariance model (CIM) characterizes the degree of structural complexity of a Boolean category as a function of its inherent degree of invariance and its cardinality or size.  ...  The truth tables in this article were generated by Truth Table Constructor 3.0, an Internet applet by Brian S. Borowski.  ... 
doi:10.1016/j.jmp.2009.04.009 fatcat:srcgq34sxredzdabrw7pmrce2a

On the Fourier spectrum of monotone functions

Nader H. Bshouty, Christino Tamon
1995 Proceedings of the twenty-seventh annual ACM symposium on Theory of computing - STOC '95  
It is shown that this is tight in the sense that for any subexponential-time algorithm there is a monotone Boolean function which this algorithm cannot approximate with error better than ... . The main result  ...  In learning theory, several polynomial-time algorithms for learning some classes of monotone Boolean functions, such as Boolean functions with O(log^2 n / log log n) relevant variables, are presented.  ...  We also thank Dan Boneh, for suggesting the notion of influence norm, Yishay Mansour, for suggesting the average sensitivity viewpoint, and Uriel Feige, for pointing out to us some references on Hamiltonian  ... 
doi:10.1145/225058.225125 dblp:conf/stoc/BshoutyT95 fatcat:xmpa3c6azzd4bf6zeyr6zcruki

Learning Functions: When Is Deep Better Than Shallow [article]

Hrushikesh Mhaskar, Qianli Liao, Tomaso Poggio
2016 arXiv   pre-print
While the universal approximation property holds both for hierarchical and shallow networks, we prove that deep (hierarchical) networks can approximate the class of compositional functions with the same  ...  We then define a general class of scalable, shift-invariant algorithms to show a simple and natural set of requirements that justify deep convolutional networks.  ...  In fact our results seem to generalize properties already known for Boolean functions which are of course a special case of functions of real variables.  ... 
arXiv:1603.00988v4 fatcat:o5w4pcmyhfdkfmzbyq5ly37dgu
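The Mhaskar–Liao–Poggio entry concerns functions with a compositional, binary-tree structure. A minimal sketch of what such a function looks like (the constituent function h below is an arbitrary illustration, not taken from the paper):

# A compositional function on 8 inputs built as a binary tree of
# 2-argument constituents:
#   f(x1..x8) = h(h(h(x1,x2), h(x3,x4)), h(h(x5,x6), h(x7,x8))).
# Each node only sees two arguments; this is the structural property a deep
# (hierarchical) network can match, whereas a shallow network must treat f
# as a generic function of all 8 variables.

def h(a, b):
    # Illustrative 2-argument constituent (any smooth function would do).
    return (a * b + a - b) / 2.0

def compositional_f(x):
    assert len(x) == 8
    level1 = [h(x[0], x[1]), h(x[2], x[3]), h(x[4], x[5]), h(x[6], x[7])]
    level2 = [h(level1[0], level1[1]), h(level1[2], level1[3])]
    return h(level2[0], level2[1])

print(compositional_f([0.1 * i for i in range(8)]))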

On Robust Concepts and Small Neural Nets

Amit Deshpande, Sushrut Karmalkar
2017 International Conference on Learning Representations  
We also give a polynomial time learning algorithm that outputs a small two-layer linear threshold circuit that approximates such a given function.  ...  We also show weaker generalizations of this to noise-stable polynomial threshold functions and noise-stable boolean functions in general.  ...  We show an efficient analog of the universal approximation theorem for neural networks in the case of noise-sensitive halfspaces of boolean hypercube, and gave efficient learning algorithms for the same  ... 
dblp:conf/iclr/0001K17 fatcat:todir5xr2fdz5ecxm5ssw4vyja
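The Deshpande–Karmalkar learner outputs a small two-layer linear threshold circuit. As a sketch of the circuit form only (weights and sizes below are arbitrary placeholders, not the learner's output):

def sign(t):
    return 1 if t >= 0 else -1

def two_layer_ltf(x, first_layer, top_weights, top_bias):
    """Evaluate a two-layer linear threshold circuit on x in {-1,+1}^n.
    first_layer is a list of (weights, bias) pairs; the top gate applies a
    weighted threshold to the first-layer outputs."""
    hidden = [sign(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in first_layer]
    return sign(sum(w * g for w, g in zip(top_weights, hidden)) + top_bias)

# Example: a 3-input circuit with two hidden threshold gates.
first_layer = [([1.0, 1.0, 1.0], -0.5), ([1.0, -1.0, 0.0], 0.0)]
print(two_layer_ltf([1, -1, 1], first_layer, [1.0, 1.0], 0.0))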

Lower Bounds for Agnostic Learning via Approximate Rank

Adam R. Klivans, Alexander A. Sherstov
2010 Computational Complexity  
For the concept class of majority functions, we obtain a lower bound of Ω(2^n/n), which almost meets the trivial upper bound of 2^n for any concept class.  ...  These lower bounds substantially strengthen and generalize the polynomial approximation lower bounds of Paturi (1992) and show that the regression-based agnostic learning algorithm of Kalai et al. (2005)  ...  Introduction Approximating Boolean functions by linear combinations of small sets of features is a fundamental area of study in machine learning.  ... 
doi:10.1007/s00037-010-0296-y fatcat:u5wagdcpzvhcphuk72z646uhhe
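The Klivans–Sherstov bounds are phrased in terms of the ε-approximate rank of a sign matrix. For orientation, the standard definition of that quantity (stated from general background, not quoted from the paper) is:

% epsilon-approximate rank of a sign matrix M in {-1,+1}^{N x N}
\operatorname{rank}_{\varepsilon}(M) \;=\;
  \min\bigl\{ \operatorname{rank}(A) \;:\;
    A \in \mathbb{R}^{N \times N},\ \ |A_{ij} - M_{ij}| \le \varepsilon
    \ \text{for all } i, j \bigr\}.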

Classification by polynomial surfaces

Martin Anthony
1995 Discrete Applied Mathematics  
We then use these results on the VC dimension to quantify the sample size required for valid generalization in Valiant's probably approximately correct framework (Valiant, 1984; Blumer et al., 1989). *  ...  We then compute the Vapnik-Chervonenkis dimension of the class of functions realized by polynomial separating surfaces of at most a given degree, both for the case of Boolean inputs and real inputs.  ...  Acknowledgements I thank Graham Brightwell for helpful discussions and comments and Michael Saks for helpful communications regarding the results of Noga Alon.  ... 
doi:10.1016/0166-218x(94)00008-2 fatcat:pmwegwqi7nfnjeokot6orrrktm
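Anthony's paper bounds the VC dimension of classifiers given by polynomial separating surfaces of bounded degree. The construction behind such classifiers is easy to sketch: a degree-d polynomial threshold function is a linear threshold function applied to all monomials of degree at most d. The snippet below (generic; the weights and the circle example are illustrative, not from the paper) shows the idea:

from itertools import combinations_with_replacement

def monomial_features(x, degree):
    """All monomials of the inputs up to the given degree (including the
    constant 1), so a linear threshold over these features realizes a
    polynomial separating surface of that degree."""
    feats = [1.0]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            prod = 1.0
            for i in idx:
                prod *= x[i]
            feats.append(prod)
    return feats

def poly_surface_classifier(x, weights, degree):
    feats = monomial_features(x, degree)
    return 1 if sum(w * f for w, f in zip(weights, feats)) >= 0 else -1

# Degree-2 surface in 2 variables: features are [1, x0, x1, x0^2, x0*x1, x1^2];
# the weights below carve out the inside of the unit circle.
weights = [1.0, 0.0, 0.0, -1.0, 0.0, -1.0]
print(poly_surface_classifier([0.5, 0.5], weights, 2))   # +1 (inside)
print(poly_surface_classifier([1.5, 0.0], weights, 2))   # -1 (outside)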

Page 6059 of Mathematical Reviews Vol. , Issue 2002H [page]

2002 Mathematical Reviews  
One such question is whether a uniform training set is available for learning any function in a given approximation class.  ...  The Boolean functions we consider belong to approximation classes, i.e., functions that are approximable (in various norms) by a few Fourier basis functions, or irreducible characters of the domain abelian  ... 

The Polynomial Method is Universal for Distribution-Free Correlational SQ Learning [article]

Aravind Gollakota, Sushrut Karmalkar, Adam Klivans
2020 arXiv   pre-print
We consider the problem of distribution-free learning for Boolean function classes in the PAC and agnostic models.  ...  or approximate degree of any function class directly imply CSQ lower bounds for PAC or agnostic learning respectively.  ...  SQ vs CSQ While we do not prove lower bounds in the general SQ model, it seems unlikely that SQ learning is more powerful than CSQ learning for PAC learning Boolean function classes [BF02] .  ... 
arXiv:2010.11925v2 fatcat:qcxe4nk6ofh3zmvwpq6mqkfbkq
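The Gollakota–Karmalkar–Klivans result concerns correlational statistical query (CSQ) learning, where the learner only receives noisy estimates of correlations E_x[phi(x) * f(x)]. A minimal simulation of such an oracle (the name csq_oracle and the sampling-based tolerance model are illustrative assumptions, not the formal definition used in the paper):

import random

def csq_oracle(target, distribution_sampler, tolerance, num_samples=10000):
    """Return a correlational statistical query oracle for the target concept.
    A query phi : X -> [-1, 1] is answered with an estimate of
    E_x[ phi(x) * target(x) ], accurate up to roughly the tolerance
    (simulated here by sampling plus a small random perturbation)."""
    def query(phi):
        xs = [distribution_sampler() for _ in range(num_samples)]
        estimate = sum(phi(x) * target(x) for x in xs) / num_samples
        return estimate + random.uniform(-tolerance, tolerance)
    return query

# Example: target is the parity of the first two bits of a uniform 4-bit string.
def sample_uniform_4bits():
    return tuple(random.choice([-1, 1]) for _ in range(4))

target = lambda x: x[0] * x[1]
oracle = csq_oracle(target, sample_uniform_4bits, tolerance=0.01)
print(oracle(lambda x: x[0] * x[1]))  # close to 1: the query matches the target
print(oracle(lambda x: x[2]))         # close to 0: an uncorrelated query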

A review of combinatorial problems arising in feedforward neural network design

E. Amaldi, E. Mayoraz, D. de Werra
1994 Discrete Applied Mathematics  
Valiant's learning-from-examples model, which formalizes the problem of generalization, is presented and open questions are mentioned.  ...  Exact and heuristic algorithms for designing networks with single or multiple layers are discussed and complexity results related to the learning problems are reviewed.  ...  The concept of LTB functions can be extended to general threshold Boolean functions (TB functions) by considering general (m − 1)-dimensional discriminant manifolds instead of hyperplanes.  ... 
doi:10.1016/0166-218x(92)00184-n fatcat:vktvu5xdhfglpld3rrkw72ez3m

Page 5478 of Mathematical Reviews Vol. , Issue 2003g [page]

2003 Mathematical Reviews  
gained in learning the previous dimensions.  ...  We show that, given an alternative representation of a Boolean function f, say as a read-once branching program, one can find a decision tree T which approximates f to any desired amount of accuracy.  ... 

Agnostic Learning from Tolerant Natural Proofs

Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, Marc Herbstritt
2017 International Workshop on Approximation Algorithms for Combinatorial Optimization  
function even with exp(−Ω(n)) advantage over random guessing) would yield a polynomial-time query agnostic learning algorithm for C with the approximation error O(opt).  ...  Our algorithm runs in randomized quasi-polynomial time, uses membership queries, and outputs a circuit for a given boolean function f : {0,1}^n → {0,1} that agrees with f on all but at most (poly log  ...  Such generators exist for m ≤ n(... + 1); for example, pick a random 0/1 matrix A of dimension n × ... and a random 0/1 vector v of dimension n. Let z = (A, v).  ... 
doi:10.4230/lipics.approx-random.2017.35 dblp:conf/approx/CarmosinoIKK17 fatcat:vewhdm7sjzhsjmn2ljblst2ezi
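The last excerpt of the Carmosino et al. entry describes sampling the seed of a simple generator: a random 0/1 matrix A with n rows (the number of columns is elided in the excerpt and is called ell below only as a placeholder) and a random 0/1 vector v of dimension n, packaged as z = (A, v). A sketch of just that sampling step follows; the excerpt does not say how the generator uses (A, v), so the affine map shown is only a guessed illustration and is flagged as such in the code:

import random

def sample_seed(n, ell):
    """Sample the seed described in the excerpt: a random 0/1 matrix A with
    n rows and ell columns, and a random 0/1 vector v of dimension n."""
    A = [[random.randint(0, 1) for _ in range(ell)] for _ in range(n)]
    v = [random.randint(0, 1) for _ in range(n)]
    return A, v

def affine_map(A, v, x):
    """ASSUMPTION: the excerpt does not specify how the generator uses (A, v);
    the affine map x -> Ax + v over GF(2) is one natural guess, shown here
    purely for illustration."""
    return [(sum(a * xi for a, xi in zip(row, x)) + vi) % 2 for row, vi in zip(A, v)]

A, v = sample_seed(n=8, ell=3)
print(affine_map(A, v, [1, 0, 1]))  # an 8-bit output string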

Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review

Tomaso Poggio, Hrushikesh Mhaskar, Lorenzo Rosasco, Brando Miranda, Qianli Liao
2017 International Journal of Automation and Computing  
A class of deep convolutional networks represent an important special case of these conditions, though weight sharing is not the main reason for their exponential advantage.  ...  The paper reviews and extends an emerging body of theoretical results on deep learning including the conditions under which it can be exponentially better than shallow learning.  ...  Shamir for useful emails that prompted us to clarify our results in the context of lower bounds.  ... 
doi:10.1007/s11633-017-1054-2 fatcat:mprujdvhgjh4rcrj3npx7ris4y
Showing results 1 — 15 out of 23,800 results