Optimal Cryptographic Hardness of Learning Monotone Functions [chapter]

Dana Dachman-Soled, Homin K. Lee, Tal Malkin, Rocco A. Servedio, Andrew Wan, Hoeteck Wee
2008 Lecture Notes in Computer Science  
Over the years a range of positive algorithmic results have been obtained for learning various classes of monotone Boolean functions from uniformly distributed random examples. To date, the only negative result for learning monotone functions in this model is an information-theoretic lower bound showing that certain superpolynomial-size monotone circuits cannot be learned to accuracy 1/2 + ω(log n)/√n (Blum et al., FOCS '98). This is in contrast with the situation for non-monotone functions, where a wide range of cryptographic hardness results establish that various "simple" classes of polynomial-size circuits are not learnable by polynomial-time algorithms.

In this paper we establish cryptographic hardness results for learning various "simple" classes of monotone circuits, thus giving a computational analogue of the information-theoretic hardness results of Blum et al. mentioned above. Lower bounds of the type we establish have previously only been known for non-monotone functions. Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than 1/2 + 1/√n; this accuracy bound is close to optimal by known positive results (Blum et al., FOCS '98). Other results show that under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation '04). Our main tool is a complexity-theoretic approach to hardness amplification via noise sensitivity of monotone functions that was pioneered by O'Donnell (JCSS '04).
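The noise sensitivity mentioned as the main tool is the probability that a function's value changes when each input bit is independently flipped with some small probability δ. The following Python sketch is not from the paper; it is a hypothetical illustration that empirically estimates this quantity for the majority function, a standard example of a monotone Boolean function:

```python
import random

def majority(x):
    # A monotone Boolean function: outputs 1 iff more than half the bits are 1.
    return int(sum(x) > len(x) / 2)

def noise_sensitivity(f, n, delta, trials=20000, seed=0):
    """Estimate NS_delta(f) = Pr[f(x) != f(y)], where x is a uniformly
    random n-bit string and y is obtained from x by flipping each bit
    independently with probability delta."""
    rng = random.Random(seed)
    disagreements = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < delta) for b in x]
        disagreements += int(f(x) != f(y))
    return disagreements / trials

# For small delta, majority is quite noise-stable: the estimate stays
# well below 1/2 (asymptotically it behaves like sqrt(delta)).
print(noise_sensitivity(majority, n=101, delta=0.01))
```

This only illustrates the definition; the paper's hardness amplification uses noise sensitivity analytically, in the framework pioneered by O'Donnell, rather than by sampling.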
doi:10.1007/978-3-540-70575-8_4