Extractor-Based Time-Space Lower Bounds for Learning
arXiv pre-print, 2017
A matrix M : A × X → {-1, 1} corresponds to the following learning problem: an unknown element x ∈ X is chosen uniformly at random, and a learner tries to learn x from a stream of samples (a_1, b_1), (a_2, b_2), …, where for every i, a_i ∈ A is chosen uniformly at random and b_i = M(a_i, x). Assume that k, ℓ, r are such that any submatrix of M with at least 2^-k · |A| rows and at least 2^-ℓ · |X| columns has bias of at most 2^-r. We show that any learning algorithm for the learning problem corresponding to M requires either a memory of size at least Ω(k · ℓ), or at least 2^Ω(r) samples.
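As a concrete illustration of the setup (the instance choice is ours, not taken from the paper), parity learning fits this template with A = X = {0,1}^n and M(a, x) = (-1)^{⟨a,x⟩ mod 2}. The following Python sketch generates the sample stream the learner observes:

```python
import random

def parity_matrix_entry(a, x):
    # M(a, x) = (-1)^{<a, x> mod 2}: the classic parity-learning instance.
    return -1 if sum(ai * xi for ai, xi in zip(a, x)) % 2 else 1

def sample_stream(x, n, num_samples, rng):
    # Stream of (a_i, b_i): each a_i is uniform in A = {0,1}^n,
    # and b_i = M(a_i, x) for the fixed unknown x.
    for _ in range(num_samples):
        a = tuple(rng.randrange(2) for _ in range(n))
        yield a, parity_matrix_entry(a, x)

rng = random.Random(0)
n = 8
# The unknown element x is chosen uniformly at random from X = {0,1}^n.
x = tuple(rng.randrange(2) for _ in range(n))
stream = list(sample_stream(x, n, 20, rng))
```

A learner with unbounded memory could recover x from about n linearly independent samples by Gaussian elimination; the paper's lower bound concerns what happens when memory is restricted.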
arXiv:1708.02639v1