Why are Proof Complexity Lower Bounds Hard?

Jan Pich, Rahul Santhanam
2019 IEEE 60th Annual Symposium on Foundations of Computer Science (FOCS)
We formalize and study the question of whether there are inherent difficulties to showing lower bounds on propositional proof complexity. We establish the following unconditional result: propositional proof systems cannot efficiently show that truth tables of random Boolean functions lack polynomial-size non-uniform proofs of hardness. Assuming a conjecture of Rudich, propositional proof systems also cannot efficiently show that random k-CNFs of linear density lack polynomial-size non-uniform proofs of unsatisfiability.
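For orientation, here is a sketch of the standard way a circuit lower bound is written as a propositional tautology; this is our gloss for the reader, and the paper's precise formalization of "non-uniform proofs of hardness" may differ in its details.

\[
\mathrm{lb}(f,s) \;:=\; \bigvee_{x \in \{0,1\}^n} \big( C_y(x) \neq f(x) \big),
\]

where f is a Boolean function on n bits fixed by hard-wiring its 2^n-bit truth table tt(f) into the formula, the propositional variables y describe a circuit C_y of size s on n inputs, and each disjunct is a subformula in y expressing that the circuit described by y outputs the value 1 - f(x) on input x. The formula has size poly(2^n, s) and is a tautology exactly when f has no circuit of size s; a "proof of hardness" of tt(f) is then a proof of such a formula, and "polynomial size" means polynomial in 2^n.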
Since the statements in question assert the average-case hardness of standard NP problems (MCSP and 3-SAT, respectively) against co-nondeterministic circuits for natural distributions, one interpretation of our result is that propositional proof systems are inherently incapable of efficiently proving strong complexity lower bounds in our formalization. Another interpretation is that an analogue of the Razborov-Rudich 'natural proofs' barrier holds in proof complexity: under reasonable hardness assumptions, there are natural distributions on hard tautologies for which it is infeasible to show proof complexity lower bounds for strong enough proof systems.

For the specific case of the Extended Frege (EF) propositional proof system, we show that at least one of the following cases holds: (1) EF has no efficient proofs of circuit lower bound tautologies for any Boolean function, or (2) there is an explicit family of tautologies of each length such that, under reasonable hardness assumptions, most tautologies in the family are hard but no propositional proof system can efficiently establish hardness for most tautologies in the family. Thus, under reasonable hardness assumptions, either the Circuit Lower Bounds program toward complexity separations cannot be implemented in EF, or there are inherent obstacles to implementing the Cook-Reckhow program for EF.

Motivation

Complexity theory is full of questions that are easy to state but hard to answer. The most famous of these is the P vs NP problem [14], but there are numerous others, such as the NP vs coNP problem, the PSPACE vs P problem, and the BPP vs P problem. In all of these cases, despite decades of effort, very little progress has been made. Is this because complexity theory is a young field, and we have not yet had the time to develop a deep understanding of computation and its limits? Or are these problems fundamentally intractable in some sense? Since complexity theory is the theory of intractability, it is natural to apply it to the seeming intractability of complexity-theoretic questions themselves.

Since the early days of complexity theory, progress on complexity lower bounds has gone hand-in-hand with the formalization of various sorts of barriers to progress. In the 70s, analogies between complexity theory and recursion theory were developed, with various concepts and techniques from recursion theory being adapted to the resource-bounded setting. However, Baker, Gill & Solovay [7] observed in the late 70s that popular machine simulation and diagonalization techniques from recursion theory relativized, i.e., continued to work even when machines were given access to an arbitrary oracle. By giving an oracle relative to which P = NP and another oracle relative to which P ≠ NP [7], they proved that no relativizing techniques could solve the NP vs P question. It seemed that techniques of a fundamentally different sort were required.

After this barrier result, attention shifted to a more finitistic setting. Rather than considering uniform machines that work for all inputs, Boolean circuits corresponding to finite functions became the object of study. It is well known that P can be simulated by polynomial-size Boolean circuits, and therefore super-polynomial lower bounds on Boolean circuit size imply lower bounds against P.
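To spell out this implication (a standard argument, not specific to this paper): every language decidable in polynomial time has polynomial-size circuits, so

\[
\mathsf{P} \subseteq \mathsf{P/poly}, \qquad\text{hence}\qquad \exists f \in \mathsf{NP} \text{ requiring circuits of size } n^{\omega(1)} \;\Longrightarrow\; \mathsf{NP} \not\subseteq \mathsf{P/poly} \;\Longrightarrow\; \mathsf{P} \neq \mathsf{NP}.
\]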
Perhaps the hope was that circuits are simpler and more 'combinatorial' objects, and therefore more amenable to lower bounds via combinatorial and algebraic techniques. Indeed, initial results were promising. In a series of influential works [2, 21, 48, 24] applying the technique of random restrictions, super-polynomial lower bounds were shown for the Parity function against constant-depth circuits. Razborov [40] and Smolensky [47] developed the polynomial approximation technique to give lower bounds against constant-depth circuits with prime modular gates. Razborov [39] used the method of approximations to show that the Clique problem requires super-polynomial-size monotone circuits. This sequence of works introduced several new lower bound techniques, and it seemed that steady progress was being made toward the goal of separating NP and P via a Circuit Lower Bounds Program.

Circuit Lower Bounds Program: Separate NP and P by proving super-polynomial lower bounds for functions in NP against more and more expressive classes of Boolean circuits.

Unfortunately, the Circuit Lower Bounds Program stalled in the early 90s. Even now, almost 30 years later, we still don't know if there are explicit functions that require super-polynomial depth-two threshold circuits, or constant-depth circuits with Mod 6 gates. However, even if
doi:10.1109/focs.2019.00080 dblp:conf/focs/PichS19 fatcat:4of45zsirzd6ldkebom27aeaqy