A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is `application/pdf`


### PAC learnability of a concept class under non-atomic measures: a problem by Vidyasagar [article]

2010 · arXiv · pre-print

... Vidyasagar, we state a necessary and sufficient condition for distribution-free PAC learnability of a concept class C under the family of all non-atomic (diffuse) measures on the domain Ω. ... Besides, learnability of C under non-atomic measures does not imply the uniform Glivenko-Cantelli property with regard to non-atomic measures. ... are PAC learnable under the family P_na(Ω) of all non-atomic probability measures on Ω. ...

### PAC learnability under non-atomic measures: a problem by Vidyasagar [article]

2012 · arXiv · pre-print

arXiv:1105.5669v3
fatcat:hzakh5nqqvhlrcl5rbbmvtvdwm

In response to a 1997 problem of M. Vidyasagar, we state a criterion for PAC learnability of a concept class C under the family of all non-atomic (diffuse) measures on the domain Ω. ... sufficient but not necessary for PAC learnability under non-atomic measures. ... Of course the remaining imperfections are all author's own. ...
### A Note on Sample Complexity of Learning Binary Output Neural Networks under Fixed Input Distributions

2010 · 2010 Eleventh Brazilian Symposium on Neural Networks

doi:10.1109/sbrn.2010.10
dblp:conf/sbrn/Pestov10
fatcat:tuq5na7rendbdl5mdvt2qua5xu

We show that the learning sample complexity of a sigmoidal neural network constructed by Sontag (1992) required to achieve a given misclassification error under a fixed purely atomic distribution can grow ... The rate can be superexponential, a non-recursive function, etc. We further observe that Sontag's ANN is not Glivenko-Cantelli under any input distribution having a non-atomic part. ... Does there exist a non-atomic probability measure on R under which the Sontag ANN is PAC learnable? Problem 3. ...
### Guest Editors' foreword

2013 · Theoretical Computer Science

doi:10.1016/j.tcs.2012.10.007
fatcat:ciit7zasgzgytfxx66tzb5onku

The topic of Pestov's paper is already explained by its title, PAC learnability under non-atomic measures: A problem by Vidyasagar. ... In 1997 Vidyasagar posed the problem of characterizing learnability under non-atomic distributions, where a distribution D is said to be non-atomic if every set A with D(A) > 0 has a subset B with 0 < ...
### Editors' Introduction [chapter]

2013 · Lecture Notes in Computer Science

doi:10.1007/978-3-642-40935-6_1
fatcat:pchrsvhjezfbvh6dfplqhxhgcy

He also studied the learnability of regular languages and context-free languages; a sample result, obtained in collaboration with Franck Thollard, is that the class of regular languages can be PAC-learned ... using a polynomial amount of data and processing time, provided that the distributions of the samples are restricted to be generated by one of a large family of related probabilistic deterministic finite ... The purpose of Vladimir Pestov's work is explained already in its title, PAC Learnability of a Concept Class under Non-Atomic Measures: A Problem by Vidyasagar. ...
### Predictive PAC Learning and Process Decompositions [article]

2013 · arXiv · pre-print

arXiv:1309.4859v1
fatcat:xurmzcu7cbhk7faopcedo4c5wi

We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example ... In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. ... The latter is an immediate consequence of the VC-dimension characterization of PAC learnability: Theorem 1 Suppose that the concept class H is PAC learnable from IID samples. ...
### An Approach to One-Bit Compressed Sensing Based on Probably Approximately Correct Learning Theory [article]

2017 · arXiv · pre-print

arXiv:1710.07973v1
fatcat:dt6pv4rijfg3hobxia4fjukms4

In this paper, the problem of one-bit compressed sensing (OBCS) is formulated as a problem in probably approximately correct (PAC) learning. ... By coupling this estimate with well-established results in PAC learning theory, we show that a consistent algorithm can recover a k-sparse vector with O(k log(n/k)) measurements, given only the signs of the ... The concept class C is said to be PAC learnable under the family of probability measures P if there exists a PAC algorithm. ...
### Policy Transforms and Learning Optimal Policies [article]

2020 · arXiv · pre-print

arXiv:2012.11046v1
fatcat:z2awheu3gnewdaocb24w2jf544

We characterize learnability of a set of policy options by the existence of a decision rule that closely approximates the maximin optimal value of the policy transform with high probability. ... We study the problem of choosing optimal policy rules in uncertain environments using models that may be incomplete and/or partially identified. ... Definition A.4 (Agnostic PAC Learnability). ...
### Predictive PAC Learning and Process Decompositions

Advances in Neural Information Processing Systems

pmid:26321855
pmcid:PMC4551412
fatcat:giynhtyuufbnhmlayzbsrhbani

We informally call a stochastic process learnable if it admits a generalization error approaching zero in probability for any concept class with finite VC-dimension (IID processes are the simplest example ... In particular, we give a novel PAC generalization bound for mixtures of learnable processes with a generalization error that is not worse than that of each mixture component. ... The latter is an immediate consequence of the VC-dimension characterization of PAC learnability: Theorem 1 Suppose that the concept class H is PAC learnable from IID samples. ...
### Learning probabilistic automata with SMT solving

2021

doi:10.18154/rwth-2021-03668
fatcat:7xcczh5mufamvh3wizu4m3bpiy

Our algorithm then encodes the learning problem in the existential theory of the reals, to ensure that the result is itself a probabilistic automaton. ... We present an active learning algorithm capable of finding a probabilistic automaton computing the distribution yielded by a stochastic system, given that it can be modelled as such. ... Learning Weighted Finite Automata: In this section we explore the learnability of weighted finite automata, the more general class of automata. ...