Low-end uniform hardness vs. randomness tradeoffs for AM

Ronen Shaltiel, Christopher Umans
Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing (STOC '07), 2007
In 1998, Impagliazzo and Wigderson [IW98] proved a hardness vs. randomness tradeoff for BPP in the uniform setting, which was subsequently extended to give optimal tradeoffs for the full range of possible hardness assumptions by Trevisan and Vadhan [TV02] (in a slightly weaker setting). In 2003, Gutfreund, Shaltiel and Ta-Shma [GSTS03] proved a uniform hardness vs. randomness tradeoff for AM, but that result only worked on the "high end" of possible hardness assumptions. In this work, we give uniform hardness vs. randomness tradeoffs for AM that are near-optimal for the full range of possible hardness assumptions. Following [GSTS03], we do this by constructing a hitting-set generator (HSG) for AM with "resilient reconstruction." Our construction is a recursive variant of the Miltersen-Vinodchandran HSG [MV05], the only known HSG construction with this required property. The main new idea is to have the reconstruction procedure operate implicitly and locally on superpolynomially large objects, using tools from PCPs (low-degree testing, self-correction) together with a novel use of extractors that are built from Reed-Muller codes [TSZS06, SU05b] for a sort of locally computable error reduction. As a consequence we obtain gap theorems for AM (and AM ∩ coAM) that state, roughly, that either AM (or AM ∩ coAM) protocols running in time t(n) can simulate all of EXP ("Arthur-Merlin games are powerful"), or else all of AM (or AM ∩ coAM) can be simulated in nondeterministic time s(n) ("Arthur-Merlin games can be derandomized"), for a near-optimal relationship between t(n) and s(n). As in [GSTS03], the case of AM ∩ coAM yields a particularly clean theorem that is of special interest due to the wide array of cryptographic and other problems that lie in this class.
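In symbols, the dichotomy asserted by the gap theorems can be summarized roughly as follows. This is a schematic rendering only: the precise near-optimal relationship between t(n) and s(n), the circuit model, and the infinitely-often qualifiers are those established in the paper, and the notation AMTIME/NTIME is used informally here.

\[
  \mathrm{EXP} \subseteq \mathrm{AMTIME}(t(n)) \quad\text{("Arthur-Merlin games are powerful")}
  \qquad\text{or}\qquad
  \mathrm{AM} \subseteq \mathrm{NTIME}(s(n)) \quad\text{("Arthur-Merlin games can be derandomized").}
\]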
Can randomized polynomial-time algorithms be simulated deterministically with only a small slowdown? Is a polynomial slowdown possible, i.e., does BPP = P? Analogously, can Arthur-Merlin protocols be simulated by a nondeterministic machine with small slowdown? Is a polynomial slowdown possible, i.e., does AM = NP? We refer to efforts to answer the first set of questions positively as "derandomizing BPP" and efforts to answer the second set of questions positively as "derandomizing AM". Recent work [IKW02, KI04] has shown that derandomizing BPP or AM entails proving certain circuit lower bounds that currently seem well beyond our reach.

The hardness versus randomness paradigm

An influential line of research initiated by [BM84, Yao82, NW94] tries to achieve derandomization under the assumption that certain hard functions exist, thus circumventing the need to prove circuit lower bounds. More precisely, we will work with hardness assumptions concerning the circuit complexity of functions computable in exponential time.¹ Derandomizing BPP can be done with lower bounds against size-s(ℓ) deterministic circuits, while derandomizing AM typically requires lower bounds against size-s(ℓ) nondeterministic circuits, where ℓ is the input length of the hard function. Naturally, stronger assumptions (higher values of s(ℓ)) give stronger conclusions, i.e., more efficient derandomization. There are two extremes of this range of tradeoffs. At the "high end" one assumes hardness against circuits of very large size s(ℓ) = 2^Ω(ℓ) and can obtain "full derandomization," i.e., BPP = P [IW97] or AM = NP [MV05]. At the "low end" one assumes hardness against smaller circuits of size s(ℓ) = poly(ℓ) and can conclude only "weak derandomization," i.e., simulations of BPP (resp. AM) that run in subexponential deterministic (resp. nondeterministic) time [BFNW93, SU05b]. Today, after a long line of research [NW94, BFNW93, Imp95, IW97, AK01, KvM02, MV05, ISW06, SU05b, Uma03, SU05a, Uma05], we have hardness versus randomness tradeoffs for both BPP and AM that achieve "optimal parameters" in the non-uniform setting (see the discussion of non-uniform vs. uniform below).
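Schematically, the two extremes can be written as follows. This is a rough sketch rather than a verbatim statement of the cited results: subexponential time is written with the usual intersection-over-ε convention, the hard function is assumed to lie in EXP, and the circuit model is deterministic for BPP and nondeterministic for AM.

\[
  s(\ell) = 2^{\Omega(\ell)} \;\Rightarrow\; \mathrm{BPP} = \mathrm{P} \ \ (\text{resp. } \mathrm{AM} = \mathrm{NP}),
\]
\[
  s(\ell) = \mathrm{poly}(\ell) \;\Rightarrow\;
  \mathrm{BPP} \subseteq \textstyle\bigcap_{\varepsilon > 0} \mathrm{DTIME}\big(2^{n^{\varepsilon}}\big)
  \ \ \big(\text{resp. } \mathrm{AM} \subseteq \textstyle\bigcap_{\varepsilon > 0} \mathrm{NTIME}\big(2^{n^{\varepsilon}}\big)\big).
\]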
Pseudorandom generators and hitting set generators

The known hardness versus randomness tradeoffs are all achieved by constructing a pseudorandom generator (PRG). This is a deterministic function G which, on input m, produces a small set of T m-bit strings in time poly(T), with the property that a randomly chosen string from this set cannot be efficiently distinguished from a uniformly chosen m-bit string.² In this paper we are interested in a weaker variant of a pseudorandom generator called a hitting set generator (HSG). A function G is an HSG against a family of circuits on m variables if any circuit in the family that accepts at least 1/3 of its inputs also accepts one of the m-bit output strings of G (when run with input m). It is standard that, given an HSG against deterministic (resp. co-nondeterministic) circuits of size poly(m), one can derandomize RP (resp. AM) in time poly(T) by simulating the algorithm (resp. protocol) on all strings output by the HSG and accepting if at least one of the runs accepts.³
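To illustrate the standard simulation just described, here is a minimal Python sketch. The names derandomize_rp, accepts, and hsg are hypothetical stand-ins for the one-sided-error randomized algorithm and the hitting-set generator; they are illustrative and not objects defined in the paper.

from typing import Callable, Iterable

def derandomize_rp(x: str,
                   accepts: Callable[[str, str], bool],
                   hsg: Callable[[int], Iterable[str]],
                   m: int) -> bool:
    """Deterministic simulation of a one-sided-error (RP-style) algorithm.

    accepts(x, r): the randomized algorithm run on input x with random
      string r; on a YES instance it accepts for at least 1/3 of the
      m-bit strings r, and on a NO instance it never accepts.
    hsg(m): assumed to return the small set of T m-bit strings output by
      the hitting-set generator on input m.
    Because the HSG "hits" every circuit accepting at least 1/3 of its
    inputs, some string in hsg(m) makes the algorithm accept whenever x
    is a YES instance, so trying all of them decides x in time poly(T).
    """
    return any(accepts(x, r) for r in hsg(m))

For AM the same enumeration is carried out inside the nondeterministic simulation: the protocol is run on each output string of the HSG and the simulation accepts iff at least one run accepts, again in time poly(T).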
The proofs of the aforementioned hardness versus randomness tradeoffs are all composed of two parts. First, they give an efficient way to generate a set of strings (the output of the PRG or HSG) when given access to some function f. Second, they give a reduction showing that if the intended derandomization using this set of strings fails, then the function f can be computed by a small circuit, which then contradicts the initial hardness assumption.

¹ This type of assumption was introduced by [NW94], whereas the initial papers [BM84, Yao82] relied on cryptographic assumptions. In this paper we are interested in derandomizing AM, which cannot be achieved by the "cryptographic" line of hardness versus randomness tradeoffs.
² An alternative formulation is to think of G as a function that takes a t = log T bit "seed" as input and outputs the element of the set indexed by the seed.
³ By [ACR96, ACRT99], HSGs for deterministic circuits also suffice to derandomize two-sided error BPP.

doi:10.1145/1250790.1250854 dblp:conf/stoc/ShaltielU07 fatcat:rubsqhu67najncnqaetyywxj44