60 Hits in 3.9 sec

PAC Learning, VC Dimension, and the Arithmetic Hierarchy [article]

Wesley Calvert
2014 arXiv   pre-print
This family of concept classes is sufficient to cover all standard examples, and also has the property that PAC learnability is equivalent to finite VC dimension.  ...  We compute that the index set of PAC-learnable concept classes is m-complete Σ^0_3 within the set of indices for all concept classes of a reasonable form.  ...  Then C is PAC learnable if and only if C has finite VC dimension.  ... 
arXiv:1406.1111v1 fatcat:6ate4oddpndfpgz6w25l6fj3p4

PAC learning, VC dimension, and the arithmetic hierarchy

Wesley Calvert
2015 Archive for Mathematical Logic  
This family of concept classes is sufficient to cover all standard examples, and also has the property that PAC learnability is equivalent to finite VC dimension.  ...  We compute that the index set of PAC-learnable concept classes is m-complete Σ^0_3 within the set of indices for all concept classes of a reasonable form.  ...  Then C is PAC learnable if and only if C has finite VC dimension.  ...
doi:10.1007/s00153-015-0445-8 fatcat:o7wbfycwkfh4ddysjgnrdot47a
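Both versions above turn on the equivalence between PAC learnability and finite VC dimension. As an illustration only (not code from the paper), the following sketch brute-forces the shattering condition that defines VC dimension; the `shatters` helper and the toy threshold class are my own:

```python
# Illustrative sketch (not from the paper): brute-force shattering check,
# the combinatorial notion behind "PAC learnable iff finite VC dimension".
def shatters(concepts, points):
    """True if every +/- labeling of `points` is realized by some concept."""
    realized = {tuple(c(x) for x in points) for c in concepts}
    return len(realized) == 2 ** len(points)

# Toy class: thresholds h_t(x) = [x >= t] on the line.  A single point is
# shattered, but no pair is (labeling the smaller point + and the larger
# point - is unrealizable), so VCdim = 1: finite, hence PAC learnable.
thresholds = [lambda x, t=t: x >= t for t in (-1.0, 0.5, 2.5)]
print(shatters(thresholds, [0.0]))       # True
print(shatters(thresholds, [0.0, 2.0]))  # False
```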

On characterizations of learnability with computable learners [article]

Tom F. Sterkenburg
2022 arXiv   pre-print
We study computable PAC (CPAC) learning as introduced by Agarwal et al. (2020). First, we consider the main open question of finding characterizations of proper and improper CPAC learning.  ...  We give a simple general argument to exhibit such undecidability, and initiate a study of the arithmetical complexity of learnability.  ...  A hypothesis class H is PAC learnable if and only if ERM_H PAC learns H, if and only if VCdim(H) < ∞. H is CPAC learnable if it has finite VC dimension and ERM_H is computably implementable.  ...
arXiv:2202.05041v2 fatcat:anx2uwnfwrebllsxnwx6xyygum
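The CPAC condition quoted in the snippet additionally requires that empirical risk minimization be computably implementable. A minimal sketch of what that can look like for the rational threshold class, with the names and the finite candidate grid my own, is:

```python
# Hedged sketch: a computable ERM rule for H = { x -> [x >= t] : t rational },
# illustrating the "ERM computably implementable" clause of CPAC learning.
from fractions import Fraction

def erm_threshold(sample):
    """Return a threshold minimizing empirical 0-1 error on `sample`,
    a list of (x, label) pairs with rational x and label in {0, 1}."""
    xs = sorted(x for x, _ in sample)
    candidates = [xs[0] - 1] + xs  # a finite grid suffices for thresholds
    def err(t):
        return sum(int(x >= t) != y for x, y in sample)
    return min(candidates, key=err)

print(erm_threshold([(Fraction(1), 0), (Fraction(2), 1), (Fraction(3), 1)]))  # 2
```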

Learning Figures with the Hausdorff Metric by Fractals [chapter]

Mahito Sugiyama, Eiju Hirowatari, Hideki Tsuiki, Akihiro Yamamoto
2010 Lecture Notes in Computer Science  
Goal and approach: constructing a computational learning model for analog data with discretization, using the Gold-style learning model as a base model and fractals to represent (and compute) continuous objects.  ...  In the Valiant-style (PAC) learning model, the sample size is characterized by the VC dimension; for the class I of open intervals in the real line ℝ, dim_VC(I) = 2, and for the class H of half spaces in 2-dimensional  ...
doi:10.1007/978-3-642-16108-7_26 fatcat:aye7jdpv5zcp3ahl3dx4n4txru
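The snippet's claim dim_VC(I) = 2 for open intervals is easy to verify mechanically; this brute-force check (my own illustration, not the authors' code) confirms that intervals shatter two points but never three:

```python
# Verify the snippet's claim that open intervals (a, b) in R have VC
# dimension 2: two points are shattered; three are not, because the
# labeling (+, -, +) on x1 < x2 < x3 would need an interval with a gap.
def interval_shatters(points):
    points = sorted(points)
    # Endpoints between (and beyond) the sample points cover all behaviors.
    cuts = [points[0] - 1]
    cuts += [(points[i] + points[i + 1]) / 2 for i in range(len(points) - 1)]
    cuts += [points[-1] + 1]
    realized = {tuple(a < x < b for x in points) for a in cuts for b in cuts}
    return len(realized) == 2 ** len(points)

print(interval_shatters([0.0, 1.0]))       # True:  VCdim >= 2
print(interval_shatters([0.0, 1.0, 2.0]))  # False: (+, -, +) unrealizable
```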

Languages as hyperplanes: grammatical inference with string kernels

Alexander Clark, Christophe Costa Florêncio, Chris Watkins
2010 Machine Learning  
We demonstrate that some mildly context-sensitive languages can be represented in this way and that it is possible to efficiently learn these using kernel PCA.  ...  Using string kernels, languages can be represented as hyperplanes in a high dimensional feature space.  ...  Acknowledgements This work has been partially supported by the EU funded PASCAL Network of Excellence on Pattern Analysis, Statistical Modelling and Computational Learning.  ... 
doi:10.1007/s10994-010-5218-3 fatcat:d6ntfwt4prdofi23n5xplxapyu
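As background for the snippet (an illustration, not necessarily the paper's kernel), the p-spectrum kernel is one standard string kernel: it embeds a string as its vector of length-p substring counts, and the inner product in that high-dimensional feature space is what lets languages be treated as hyperplanes.

```python
# Minimal p-spectrum string kernel (one standard string kernel; not
# necessarily the one used in the paper): k(s, t) is the inner product
# of the two strings' length-p substring count vectors.
from collections import Counter

def spectrum_features(s, p):
    return Counter(s[i:i + p] for i in range(len(s) - p + 1))

def spectrum_kernel(s, t, p=2):
    fs, ft = spectrum_features(s, p), spectrum_features(t, p)
    return sum(fs[u] * ft[u] for u in fs)

print(spectrum_kernel("abab", "baba"))  # 4: shared bigrams "ab" and "ba"
```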

Languages as Hyperplanes: Grammatical Inference with String Kernels [chapter]

Alexander Clark, Christophe Costa Florêncio, Chris Watkins
2006 Lecture Notes in Computer Science  
We demonstrate that some mildly context-sensitive languages can be represented in this way and that it is possible to efficiently learn these using kernel PCA.  ...  Using string kernels, languages can be represented as hyperplanes in a high dimensional feature space.  ...  Acknowledgements This work has been partially supported by the EU funded PASCAL Network of Excellence on Pattern Analysis, Statistical Modelling and Computational Learning.  ... 
doi:10.1007/11871842_13 fatcat:h5r6asj7wjg4lkjdksyrw4ekvm

Editors' Introduction [chapter]

Marcus Hutter, Rocco A. Servedio, Eiji Takimoto
2007 Lecture Notes in Computer Science  
The research reported here ranges over areas such as unsupervised learning, inductive inference, complexity and learning, boosting and reinforcement learning, query learning models, grammatical inference  ...  Philosophers have pondered the phenomenon of learning for millennia; scientists and psychologists have studied learning for more than a century.  ...  Strengthening previous results of Goldberg and Jerrum, Alonso and Montaña give upper bounds on the VC dimension of concept classes in which the membership test for whether an input belongs to a concept  ... 
doi:10.1007/978-3-540-75225-7_1 fatcat:jjzpiql74jac3cvxixuyav4foy

An Axiomatic Theory of Provably-Fair Welfare-Centric Machine Learning [article]

Cyrus Cousins
2021 arXiv   pre-print
We address an inherent difficulty in welfare-theoretic fair machine learning by proposing an equivalently axiomatically-justified alternative and studying the resulting computational and statistical learning  ...  Building upon these concepts, we define fair-PAC (FPAC) learning, where an FPAC learner is an algorithm that learns an ε-δ malfare-optimal model with bounded sample complexity, for any data distribution  ...  Both PAC and FPAC learning are parameterized by a learning task (model space and loss function), and we explore the rich learnability-hierarchy under variations of these concepts.  ... 
arXiv:2104.14504v2 fatcat:k562rqjc3vdf5ilfvv5yddoauq

Implicit Regularization in Deep Learning [article]

Behnam Neyshabur
2017 arXiv   pre-print
We show that implicit regularization induced by the optimization method is playing a key role in generalization and success of deep learning models.  ...  We further study the invariances in neural networks, suggest complexity measures and optimization algorithms that have similar invariances to those in neural networks and evaluate them on a number of learning  ...  Introduction Deep learning refers to training typically complex and highly over-parameterized models that benefit from learning a hierarchy of representations.  ... 
arXiv:1709.01953v2 fatcat:o3xzvsq2dfaoxceks5bsx6lcs4

Why Philosophers Should Care about Computational Complexity [chapter]

2013 Computability  
the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction and Goodman's grue riddle, the foundations of quantum mechanics  ...  In particular, I argue that computational complexity theory, the field that studies the resources (such as time, space, and randomness) needed to solve computational problems, leads to new perspectives on  ...  Section 5; and to Andy Drucker, Michael Forbes, Dana Moshkovitz, Ronald de Wolf, and Avi Wigderson for their feedback.  ...
doi:10.7551/mitpress/8009.003.0011 fatcat:sk6y4k62xbfgdn4nytamdeq6my

Why Philosophers Should Care About Computational Complexity [article]

Scott Aaronson
2011 arXiv   pre-print
In particular, I argue that computational complexity theory, the field that studies the resources (such as time, space, and randomness) needed to solve computational problems, leads to new perspectives  ...  , economic rationality, closed timelike curves, and several other topics of philosophical interest.  ...  ; and to David  ...
arXiv:1108.1791v3 fatcat:eqvibzsxmjczvgmnbtlv5amyxe

Explaining generalization in deep learning: progress and fundamental limits [article]

Vaishnavh Nagarajan
2021 arXiv   pre-print
Uniform convergence has in fact been the most widely used tool in deep learning literature, thanks to its simplicity and generality.  ...  This dissertation studies a fundamental open challenge in deep learning theory: why do deep networks generalize well even while being overparameterized, unregularized and fitting the training data to zero  ...  (Sketch) In the case of VC-dimension (which is also based on uniform convergence), this follows from the fact that the VC-dimension of F is as large as the parameter count.  ... 
arXiv:2110.08922v1 fatcat:blzj6rhlffgrjpeaj2ia57g6gq
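The quoted sketch says uniform-convergence arguments through VC dimension fail for overparameterized networks because the VC dimension tracks the parameter count. A back-of-the-envelope computation (my own illustration, using the classical bound up to constants) shows why the resulting guarantee becomes vacuous:

```python
# Classical VC generalization bound, up to constants:
#   gap <= sqrt( (d * ln(2em/d) + ln(1/delta)) / m ),  valid for d <= m.
# When d (here: parameter count) approaches the sample size m, the bound
# exceeds 1 and says nothing about 0-1 loss.
from math import e, log, sqrt

def vc_bound(d, m, delta=0.01):
    return sqrt((d * log(2 * e * m / d) + log(1 / delta)) / m)

print(vc_bound(d=100, m=50_000))     # ~0.13: informative
print(vc_bound(d=50_000, m=50_000))  # ~1.3: vacuous
```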

Definable relations and first-order query languages over strings

Michael Benedikt, Leonid Libkin, Thomas Schwentick, Luc Segoufin
2003 Journal of the ACM  
We identify a subset of these models which have additional attractive properties, such as finite VC dimension and quantifier elimination.  ...  Although the one-dimensional sets in these algebras are still the regular sets, the algebra as a whole shares many of the attractive properties of the star-free languages.  ...  Acknowledgments We thank Wolfgang Thomas, Scott Weinstein, Emmanuel Waller, and Jan Van den Bussche for fruitful discussions on the subject, and the anonymous referees for numerous helpful comments.  ... 
doi:10.1145/876638.876642 fatcat:dnyytdskv5holgdahoekdxucki

Neural Networks Regularization Through Representation Learning [article]

Soufiane Belharbi
2018 arXiv   pre-print
Neural network models and deep models are among the leading, state-of-the-art models in machine learning.  ...  Many approaches have been proposed to prevent the network from overfitting and improve its generalization performance, such as data augmentation, early stopping, parameter sharing, unsupervised learning  ...  Acknowledgement This work has been partly supported by the grant ANR-11-JS02-010 LeMon and the grant ANR-16-CE23-0006 "Deep in France".  ...
arXiv:1807.05292v1 fatcat:qwqvdyzkf5alrjtp6fnclxpuqu

Neural Computing (Dagstuhl Seminar 9445)

Wolfgang Maass, Christoph von der Malsburg, Eduardo Sontag, Ingo Wegener
1995
In addition we give corresponding results for networks of spiking neurons with a limited timing precision, and we prove upper and lower bounds for the VC-dimension and pseudo-dimension of networks of spiking neurons  ...  The construction is as follows: start from a class F of functions of finite VC dimension, take the convex hull co(F) of F, and then take the closure cl(co(F)) of co(F) in an appropriate sense.  ...
doi:10.4230/dagsemrep.103 fatcat:imnnmjqn65bbnm4udjs3gcgzdq
Showing results 1 — 15 out of 60 results