Pseudo-Derandomizing Learning and Approximation

Igor Carboni Oliveira, Rahul Santhanam
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2018)
We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [7]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings by pseudo-deterministic algorithms in a generic fashion, a goal we succinctly term pseudo-derandomization.
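To make the definition concrete, here is a minimal Python sketch (illustrative only, not taken from the paper). A standard randomized search for a quadratic non-residue is correct on every run, yet it is not pseudo-deterministic because different runs return different witnesses. The helper canonical_output_frequency is a hypothetical name introduced here to empirically estimate how often the most common output appears; pseudo-determinism asks for a single canonical output with probability at least 2/3.

import random
from collections import Counter

def randomized_qnr(p):
    # Classic randomized search: output *some* quadratic non-residue mod p.
    # Every answer is correct, but different runs typically return different
    # answers, so this procedure is NOT pseudo-deterministic.
    while True:
        x = random.randrange(2, p)
        if pow(x, (p - 1) // 2, p) == p - 1:  # Euler's criterion
            return x

def canonical_output_frequency(algo, x, trials=200):
    # Estimate how often the most frequent output occurs. A pseudo-deterministic
    # algorithm would return one canonical value with probability >= 2/3.
    counts = Counter(algo(x) for _ in range(trials))
    return counts.most_common(1)[0][1] / trials

if __name__ == "__main__":
    p = 1000000007  # a prime; about half the nonzero residues are non-residues
    print(canonical_output_frequency(randomized_qnr, p))  # close to 1/trials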
Learning. In the setting of learning with membership queries, we first show that randomized learning algorithms can be derandomized (resp. pseudo-derandomized) under the standard hardness assumption that E (resp. BPE) requires large Boolean circuits. Thus, despite the fact that learning is an algorithmic task that requires interaction with an oracle, standard hardness assumptions suffice to (pseudo-)derandomize it. We also unconditionally pseudo-derandomize any quasi-polynomial time learning algorithm for polynomial-size circuits on infinitely many input lengths in sub-exponential time. Next, we establish a generic connection between learning and derandomization in the reverse direction, by showing that deterministic (resp. pseudo-deterministic) learning algorithms for a concept class C imply hitting sets against C that are computable deterministically (resp. pseudo-deterministically). In particular, this suggests a new approach to constructing hitting set generators against AC^0[p] circuits by giving a deterministic learning algorithm for AC^0[p].

Approximation. Turning to approximation, we unconditionally pseudo-derandomize any polynomial-time randomized approximation scheme for integer-valued functions infinitely often in sub-exponential time over any samplable distribution on inputs. As a corollary, we get that the (0,1)-Permanent has a fully pseudo-deterministic approximation scheme running in sub-exponential time infinitely often over any samplable distribution on inputs. Finally, we investigate the notion of approximate canonization of Boolean circuits. We use a connection between pseudo-deterministic learning and approximate canonization to show that if BPE does not have sub-exponential size circuits infinitely often, then there is a pseudo-deterministic approximate canonizer for AC^0[p] computable in quasi-polynomial time.
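The following toy sketch (hypothetical, and not the construction from the paper) illustrates what a pseudo-deterministic approximation scheme must provide: a canonical (1+eps)-approximate output with high probability. It amplifies a black-box randomized estimator (the stand-in noisy_estimator below) by taking a median of independent runs and then snaps the result to a fixed multiplicative grid. This yields a fixed output on most inputs, but it fails when the true value falls near a grid boundary, a case that a genuine pseudo-derandomization has to handle.

import math
import random
import statistics

def noisy_estimator(true_value, eps):
    # Stand-in for a randomized approximation scheme: returns a value within
    # a (1 +/- eps/4) factor of true_value.
    return true_value * (1.0 + random.uniform(-eps / 4, eps / 4))

def rounded_approx(true_value, eps, reps=31):
    # Median-amplify the estimator, then snap to the grid {(1+eps)^k : k integer}.
    # Unless true_value is close to a grid boundary, every run returns the same
    # grid point, i.e. a canonical (1+eps)-approximation.
    med = statistics.median(noisy_estimator(true_value, eps) for _ in range(reps))
    k = round(math.log(med, 1.0 + eps))
    return (1.0 + eps) ** k

if __name__ == "__main__":
    outputs = {rounded_approx(12345.678, 0.1) for _ in range(100)}
    print(outputs)  # typically a single value: a fixed, canonical approximation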
doi:10.4230/lipics.approx-random.2018.55 dblp:conf/approx/OliveiraS18