Tight Bounds on ℓ_1 Approximation and Learning of Self-Bounding Functions

Vitaly Feldman, Pravesh Kothari, Jan Vondrák
2019 · arXiv preprint
We study the complexity of learning and approximation of self-bounding functions over the uniform distribution on the Boolean hypercube {0,1}^n. Informally, a function f: {0,1}^n → ℝ is self-bounding if for every x ∈ {0,1}^n, f(x) upper bounds the sum of all n marginal decreases in the value of the function at x. Self-bounding functions include such well-known classes as submodular and fractionally-subadditive (XOS) functions. They were introduced by Boucheron et al. (2000) in the context of concentration of measure inequalities. Our main result is a nearly tight ℓ_1-approximation of self-bounding functions by low-degree juntas. Specifically, every self-bounding function can be ϵ-approximated in ℓ_1 by a polynomial of degree Õ(1/ϵ) over 2^Õ(1/ϵ) variables. We show that both the degree and the junta size are optimal up to logarithmic factors. Previous techniques considered the stronger ℓ_2 approximation and proved nearly tight bounds of Θ(1/ϵ^2) on the degree and 2^Θ(1/ϵ^2) on the number of variables. Our bounds rely on an analysis of the noise stability of self-bounding functions together with a stronger connection between noise stability and ℓ_1 approximation by low-degree polynomials. The same technique also yields tighter bounds on ℓ_1 approximation by low-degree polynomials and faster learning algorithms for halfspaces. These results lead to improved, and in several cases almost tight, bounds for PAC and agnostic learning of self-bounding functions relative to the uniform distribution. In particular, assuming hardness of learning juntas, we show that PAC and agnostic learning of self-bounding functions have complexity n^Θ̃(1/ϵ).
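To make the informal definition above concrete, here is a minimal brute-force sketch in Python that checks the sum condition stated in the abstract over all of {0,1}^n. The helper names (is_self_bounding, coverage) and the example set system are illustrative assumptions, not taken from the paper, and the check covers only the sum condition given informally here; the full definition of Boucheron et al. additionally bounds each individual marginal decrease by 1.

```python
# Brute-force check of the self-bounding sum condition on {0,1}^n:
# at every point x, f(x) must upper bound the sum over coordinates i
# of the marginal decrease f(x) - min_b f(x with coordinate i set to b).
from itertools import product


def is_self_bounding(f, n):
    """Return True if f satisfies the sum condition at every x in {0,1}^n."""
    for x in product((0, 1), repeat=n):
        fx = f(x)
        total_decrease = 0.0
        for i in range(n):
            # Smallest value of f achievable by changing only coordinate i.
            lo = min(f(x[:i] + (b,) + x[i + 1:]) for b in (0, 1))
            total_decrease += fx - lo  # marginal decrease at coordinate i (>= 0)
        if total_decrease > fx + 1e-9:  # small tolerance for float arithmetic
            return False
    return True


# Hypothetical example: a small coverage function, which is monotone
# submodular and therefore satisfies the sum condition.
SETS = [{0, 1}, {1, 2}, {2, 3}]


def coverage(x):
    covered = set()
    for bit, s in zip(x, SETS):
        if bit:
            covered |= s
    return float(len(covered))


if __name__ == "__main__":
    print(is_self_bounding(coverage, n=len(SETS)))  # expected output: True
```

This exhaustive check is only feasible for small n (it evaluates f at all 2^n points), but it illustrates the quantity the definition constrains: the total amount by which f can drop via single-coordinate changes at any point never exceeds the value of f at that point.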
arXiv:1404.4702v3