Uniform hardness versus randomness tradeoffs for Arthur-Merlin games

Dan Gutfreund, Ronen Shaltiel, Amnon Ta-Shma
<span title="">2003</span> <i title="Springer Nature"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/t46w5lpc3ngnfmzil33zdjjrp4" style="color: black;">Computational Complexity</a> </i> &nbsp;
Impagliazzo and Wigderson proved a uniform hardness vs. randomness "gap theorem" for BPP. We show an analogous result for AM: either Arthur-Merlin protocols are very strong and everything in E = DTIME(2^{O(n)}) can be proved to a sub-exponential time verifier, or else Arthur-Merlin protocols are weak and every language in AM has a polynomial-time nondeterministic algorithm such that it is infeasible to come up with inputs on which the algorithm fails. We also show that if Arthur-Merlin protocols are not very strong (in the sense explained above) then AM ∩ coAM = NP ∩ coNP. Our technique combines the nonuniform hardness versus randomness tradeoff of Miltersen and Vinodchandran with "instance checking". A key ingredient in our proof is identifying a novel "resiliency" property of hardness vs. randomness tradeoffs.

One of the most basic goals of computational complexity is understanding the relative power of probabilistic complexity classes such as BPP, MA and AM. In particular, a long line of research aims to show that randomness does not add substantial computational power, and much of this research strives to do so under the mildest possible unproven assumptions.

Nonuniform hardness versus randomness tradeoffs. One very fruitful sequence of results uses the "hardness versus randomness" paradigm, first suggested by Blum and Micali, and by Yao [BM84, Yao82]. The approach is to take a function that is computable in exponential time and hard for small circuits, and use it to construct a pseudo-random generator that "stretches" a short string of truly random bits into a long string of "pseudo-random" bits that cannot be distinguished from uniform by small circuits. Such generators allow deterministic simulation of probabilistic classes. Loosely speaking, these constructions differ in:

• The type of circuits "fooled" by the generator. To derandomize BPP and MA one needs to fool deterministic circuits, and to derandomize AM one needs to fool co-nondeterministic circuits.

• The "stretch" of the generator. Generators with polynomial stretch (t bits to t^c bits) are called "low-end" generators and give sub-exponential time deterministic simulation (e.g., BPP ⊆ SUBEXP = ∩_{δ>0} DTIME(2^{n^δ}) or AM ⊆ NSUBEXP = ∩_{δ>0} NTIME(2^{n^δ})). Generators with exponential stretch (t bits to 2^{Ω(t)} bits) are called "high-end" generators and give polynomial-time deterministic simulation (e.g., BPP = P or AM = NP).

• The precise assumption on the hard function. Typically, "high-end" generators require lower bounds against larger circuits (circuits of size 2^{Ω(n)}), whereas "low-end" generators may require only super-polynomial lower bounds. Generators that fool co-nondeterministic circuits typically require hardness against co-nondeterministic circuits.

Today, after a long line of research [BM84, Yao82, NW94, BFNW93, IW97, STV99, KvM99, MV99, ISW99, ISW00, SU01, Uma02], we have powerful and elegant constructions of "low-end" and "high-end" generators that derandomize BPP, MA and AM using "necessary assumptions" (i.e., assumptions that are implied by the existence of pseudo-random generators). The reader is referred to a recent survey on derandomization for more details [Kab02].

All the hardness vs. randomness tradeoffs mentioned above give generators that fool some nonuniform class of circuits, and they require a uniformly computable function that is hard against a nonuniform class of circuits; in fact, every generator against a nonuniform class of circuits implies such a function. We would like to mention that the nonuniform assumptions used in these tradeoffs can be replaced by assumptions involving only uniform classes: it was shown by Karp and Lipton [KL79] (with improvements in [BFL91]) that if EXP ≠ PH then there is a function computable in exponential time that is hard for polynomial-size circuits (both deterministic and nondeterministic); indeed, [BFL91] show that EXP ⊆ P/poly implies EXP = MA ⊆ PH.
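As a minimal formal sketch of the paradigm described above (the notation G, t, m, ε is illustrative and not taken from the paper): a generator G : {0,1}^t → {0,1}^m is said to ε-fool circuits of size m if for every such circuit C

\[ \Bigl|\ \Pr_{x \in \{0,1\}^m}[C(x)=1] \;-\; \Pr_{s \in \{0,1\}^t}[C(G(s))=1]\ \Bigr| \;\le\; \varepsilon. \]

Given a probabilistic algorithm that uses m random bits on inputs of length n, hard-wiring the input yields a small circuit; replacing its random string by G(s) and taking a majority vote over all 2^t seeds gives a deterministic simulation running in time roughly 2^t times the cost of computing G and running the algorithm. For AM, the distinguisher arising this way is a co-nondeterministic circuit, which is why derandomizing AM requires generators that fool that stronger class.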
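The running times attributed above to "low-end" and "high-end" generators come from the same seed-enumeration count; the following back-of-the-envelope calculation is only an illustration, assuming the algorithm needs m = n^k pseudo-random bits:

\[ \text{low end } (t \mapsto t^{c}):\quad t = m^{1/c} = n^{k/c}, \qquad 2^{t}\cdot \mathrm{poly}(n) = 2^{O(n^{k/c})}; \]
\[ \text{high end } (t \mapsto 2^{\Omega(t)}):\quad t = O(\log m) = O(\log n), \qquad 2^{t}\cdot \mathrm{poly}(n) = \mathrm{poly}(n). \]

Roughly speaking, a family of generators achieving every polynomial stretch thus yields the sub-exponential simulation BPP ⊆ ∩_{δ>0} DTIME(2^{n^δ}), while exponential stretch already gives BPP = P (and, with generators fooling co-nondeterministic circuits, AM = NP).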
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s00037-003-0178-7">doi:10.1007/s00037-003-0178-7</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/zea4gq5hpjaqnlldauryyjdeqe">fatcat:zea4gq5hpjaqnlldauryyjdeqe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20170921210552/http://www.cs.tau.ac.il/~amnon/Papers/GST.submitted.cc.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/5a/c8/5ac81455a9ce7503b45fb3acab577cc3a0cb4d2f.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s00037-003-0178-7"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="external alternate icon"></i> springer.com </button> </a>