
Complexity Barriers as Independence [chapter]

Antonina Kolokolova
2017 The Incomputable  
After many years of effort, the main questions of complexity theory remain unresolved, even though the concepts involved are simple. Understanding the main idea behind the statement of the "P vs NP" problem does not require much background ("is it easier to check answers than to produce them?"). Yet we are as far from resolving it as ever. Much work has been done to unravel the intricate structure of the complexity world; the "complexity zoo" contains hosts of inhabitants. But the main questions are still elusive. So a natural question comes to mind: is there any intrinsic reason why this is still unknown? Is there any rationale for why the proofs are out of our reach? Maybe we are not using the right techniques, or maybe we are not pushing our techniques far enough? After trying to prove a statement and failing, we try to prove its negation; after failing at that as well, we resort to looking for an explanation that might give us a hint at why our attempts are failing. Indeed, in the world of computational complexity there have been several results of this nature: results stating that current techniques are, in a precise mathematical sense, insufficient to resolve the main open problems. We call these results "barriers". A pessimistic view of the barrier results would be that the questions are intrinsically hard. But there is a more optimistic way of interpreting them. The fact that certain classes of proof techniques, ones with specific properties, are eliminated gives us a direction in which to search for new techniques. It gives us a method for discovering ways of approaching questions in places where we might not have been looking, if not for the barrier results. In this paper, we focus on three major complexity barriers: Relativization [BGS75], Algebrization [AW09], and Natural Proofs [RR97]. Interestingly, all three can be recast in the framework of independence from a logical theory. That is, theories can be constructed which formalize (almost) all known techniques, yet for which the main open questions of complexity theory are independent.
doi:10.1007/978-3-319-43669-2_10 fatcat:i3d4hczuajb5fkwgsw3q4g3h7a

Approximating solution structure of the Weighted Sentence Alignment problem [article]

Antonina Kolokolova, Renesa Nizamee
2014 arXiv   pre-print
We study the complexity of approximating the solution structure of the bijective weighted sentence alignment problem of DeNero and Klein (2008). In particular, we consider the complexity of finding an alignment that has a significant overlap with an optimal alignment. We discuss ways of representing the solution for the general weighted sentence alignment as well as the phrases-to-words alignment problem, and show that computing a string which agrees with the optimal sentence partition on more than half (plus an arbitrarily small polynomial fraction) of the positions for the phrases-to-words alignment is NP-hard. For the general weighted sentence alignment we obtain such a bound from agreement on a little over 2/3 of the bits. Additionally, we generalize the Hamming distance approximation of a solution structure to approximation with respect to the edit distance metric, obtaining similar lower bounds.
arXiv:1409.2433v1 fatcat:yxzmj42tzzaufg6k7rg3u7i36u
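
The hardness thresholds in the abstract above are stated in terms of agreement between bit strings encoding solutions. As a minimal illustration of that measure (a toy sketch; the encoding and the helper name are ours, not the paper's), agreement is just normalized Hamming similarity:

```python
def hamming_agreement(s: str, t: str) -> float:
    """Fraction of positions on which two equal-length bit strings agree."""
    if len(s) != len(t):
        raise ValueError("encodings must have equal length")
    return sum(a == b for a, b in zip(s, t)) / len(s)

# Toy example: a candidate partition string vs. an "optimal" one.
print(hamming_agreement("110100", "110001"))  # 0.666...
```

The NP-hardness result says that guaranteeing this value above 1/2 (plus a small polynomial fraction) against the optimal partition is already intractable.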

Stabbing Planes [article]

Paul Beame, Noah Fleming, Russell Impagliazzo, Antonina Kolokolova, Denis Pankratov, Toniann Pitassi, Robert Robere
2022 arXiv   pre-print
We develop a new semi-algebraic proof system called Stabbing Planes which formalizes modern branch-and-cut algorithms for integer programming and is in the style of DPLL-based modern SAT solvers. As with DPLL, there is only a single rule: the current polytope can be subdivided by branching on an inequality and its "integer negation". That is, we (nondeterministically) choose a hyperplane ax ≥ b with integer coefficients, which partitions the polytope into three pieces: the points in the polytope satisfying ax ≥ b, the points satisfying ax ≤ b - 1, and the middle slab b - 1 < ax < b. Since the middle slab contains no integer points, it can be safely discarded, and the algorithm proceeds recursively on the other two branches. Each path terminates when the current polytope is empty, which is polynomial-time checkable. Among our results, we show that Stabbing Planes can efficiently simulate the Cutting Planes proof system, and is equivalent to a tree-like variant of the R(CP) system of Krajíček (1998). As well, we show that it possesses short proofs of the canonical family of systems of 𝔽_2-linear equations known as the Tseitin formulas. Finally, we prove linear lower bounds on the rank of Stabbing Planes refutations by adapting lower bounds in communication complexity, and use these bounds to show that Stabbing Planes proofs cannot be balanced.
arXiv:1710.03219v2 fatcat:4dedtl7iordyvcmquixcpwmd6y
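
A compact way to see the single rule described above is as a recursive procedure on a system of integer-coefficient inequalities. The sketch below is a simplification for illustration: it assumes SciPy for the LP emptiness test, and the query-selection oracle choose_query is a stand-in for the nondeterministic choice (termination depends entirely on it):

```python
import numpy as np
from scipy.optimize import linprog

def is_empty(ineqs):
    """LP feasibility (poly-time): is there a real point with a.x >= b for all rows?"""
    A = np.array([[-ai for ai in a] for a, b in ineqs], dtype=float)  # a.x >= b  <=>  -a.x <= -b
    ub = np.array([-b for a, b in ineqs], dtype=float)
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=ub, bounds=(None, None), method="highs")
    return res.status == 2  # status 2 = infeasible

def stab(ineqs, choose_query, depth=0):
    """Refutation sketch: certify that the polytope contains no integer point."""
    if is_empty(ineqs):
        return depth  # leaf: current polytope is empty over the reals
    a, b = choose_query(ineqs)  # integer vector a and integer b, chosen nondeterministically
    neg = tuple(-ai for ai in a)
    d1 = stab(ineqs + [(a, b)], choose_query, depth + 1)        # branch 1: a.x >= b
    d2 = stab(ineqs + [(neg, 1 - b)], choose_query, depth + 1)  # branch 2: a.x <= b - 1
    return max(d1, d2)  # the slab b - 1 < a.x < b has no integer point and is discarded
```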

The Proof Complexity of SMT Solvers [chapter]

Robert Robere, Antonina Kolokolova, Vijay Ganesh
2018 Lecture Notes in Computer Science  
The resolution proof system has been enormously helpful in deepening our understanding of conflict-driven clause-learning (CDCL) SAT solvers. In the interest of providing a similar proof-complexity-theoretic analysis of satisfiability modulo theories (SMT) solvers, we introduce a generalization of resolution called Res(T). We show that many of the known results comparing resolution and CDCL solvers lift to the SMT setting, such as the result of Pipatsrisawat and Darwiche showing that CDCL solvers with "perfect" non-deterministic branching and an asserting clause-learning scheme can polynomially simulate general resolution. We also describe a stronger version of Res(T), Res*(T), capturing SMT solvers that allow the introduction of new literals. We analyze the theory EUF of equality with uninterpreted functions, and show that the Res*(EUF) system is able to simulate an earlier calculus introduced by Bjørner and de Moura for the purpose of analyzing DPLL(EUF). Further, we show that Res*(EUF) (and thus SMT algorithms with clause learning over EUF, new-literal introduction rules, and perfect branching) can simulate the Frege proof system, which is well known to be far more powerful than resolution. Finally, we prove under the Exponential Time Hypothesis (ETH) that any reduction from EUF to SAT (such as the Ackermann reduction) must, in the worst case, produce an instance of size Ω(n log n) from an instance of size n.
doi:10.1007/978-3-319-96142-2_18 fatcat:jgsweuz5rbbghf2p2ofiqagpc4
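
Res(T) augments propositional resolution with clauses supplied by the theory T; the base rule it builds on is easy to state concretely. A minimal sketch (the DIMACS-style literal convention is our choice, not the paper's notation):

```python
def resolve(c1: frozenset, c2: frozenset, pivot: int) -> frozenset:
    """Resolution rule: from (C ∨ x) and (D ∨ ¬x), derive (C ∨ D).

    Clauses are frozensets of nonzero ints; -v denotes the negation of v.
    """
    assert pivot in c1 and -pivot in c2, "pivot must occur positively in c1, negatively in c2"
    return (c1 - {pivot}) | (c2 - {-pivot})

# (x1 ∨ x2) and (¬x1 ∨ x3) resolve on x1 to (x2 ∨ x3).
print(sorted(resolve(frozenset({1, 2}), frozenset({-1, 3}), 1)))  # [2, 3]
```

In Res(T), a derivation may additionally introduce clauses valid in the theory T (theory lemmas), which is how an SMT solver's theory conflicts enter the proof; Res*(T) further permits literals over fresh T-atoms.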

Closure Properties of Weak Systems of Bounded Arithmetic [chapter]

Antonina Kolokolova
2005 Lecture Notes in Computer Science  
In this paper we study the properties of systems of bounded arithmetic capturing small complexity classes, and state conditions sufficient for such systems to capture the corresponding complexity class tightly. Our class of systems of bounded arithmetic is the class of second-order systems with a comprehension axiom for a syntactically restricted class of formulas Φ ⊂ Σ^B_1, based on a logic in the descriptive complexity setting. This work generalizes the results of [8] and [9]. We show that if the system 1) extends V^0 (the second-order version of IΔ_0), 2) Δ_1-defines all functions with bit-graphs from Φ, and 3) proves witnessing for all theorems from Φ, then the class of Σ^B_1-definable functions of the resulting system is exactly the class expressed by Φ in the descriptive complexity setting, provably in this system.
doi:10.1007/11538363_26 fatcat:wptivaza2nbmbkcrdmnig3eo6e
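
The comprehension axiom that parametrizes these systems has a standard shape in second-order bounded arithmetic. As a reference point (the usual textbook rendering; the paper's precise side conditions may differ):

```latex
% Standard shape of the comprehension schema over a formula class \Phi:
\Phi\text{-COMP}: \qquad
\exists X \le n \;\forall i < n \;\bigl( X(i) \leftrightarrow \varphi(i) \bigr),
\qquad \varphi \in \Phi, \; X \text{ not free in } \varphi .
```

Instantiating Φ with a class of formulas corresponding to a complexity class (in the descriptive-complexity sense) is what lets the theory "capture" that class.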

On the Complexity of Model Expansion [chapter]

Antonina Kolokolova, Yongmei Liu, David Mitchell, Eugenia Ternovska
2010 Lecture Notes in Computer Science  
We study the complexity of model expansion (MX), the problem of expanding a given finite structure with additional relations to produce a finite model of a given formula. This is the logical task underlying many practical constraint languages and systems for representing and solving search problems, and our work is motivated by the need to provide theoretical foundations for these. We present results on both the data and combined complexity of MX for several fragments and extensions of FO that are relevant for this purpose, in particular the guarded fragment GF_k of FO and extensions of FO and GF_k with inductive definitions. We present these in the context of the two closely related, but more studied, problems of model checking and finite satisfiability. To obtain results on FO(ID), the extension of FO with inductive definitions, we provide translations between FO(ID) and FO(LFP), which are of independent interest.
doi:10.1007/978-3-642-16242-8_32 fatcat:3rbwgsab3ba6fifzkw4v6d4vmq
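
Concretely, MX fixes a formula and asks, given a finite structure, for an interpretation of the expansion vocabulary making the formula true. A brute-force toy instance (the graph encoding and the 2-coloring property are illustrative choices, not from the paper):

```python
# Model expansion sketch: expand a finite structure (a graph on domain {0..n-1})
# with one unary relation C so that a fixed formula holds -- here,
# "every edge crosses the cut C", i.e. C is one side of a proper 2-coloring.
from itertools import combinations

def model_expansion(n, edges):
    """Find C ⊆ domain with: for every edge (u, v), C(u) <-> not C(v)."""
    for k in range(n + 1):
        for C in map(set, combinations(range(n), k)):
            if all((u in C) != (v in C) for u, v in edges):
                return C  # an expansion satisfying the formula
    return None  # no expansion exists over this structure

print(model_expansion(3, [(0, 1), (1, 2)]))          # e.g. {1}
print(model_expansion(3, [(0, 1), (1, 2), (0, 2)]))  # None (odd cycle)
```

Here the input structure is the graph, the expansion predicate is C, and the fixed formula constrains C; for richer logics such as FO(ID), the same expand-and-check shape applies with a more expressive formula.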

Mining Circuit Lower Bound Proofs for Meta-Algorithms

Ruiwen Chen, Valentine Kabanets, Antonina Kolokolova, Ronen Shaltiel, David Zuckerman
2015 Computational Complexity  
We show that circuit lower bound proofs based on the method of random restrictions yield non-trivial compression algorithms for "easy" Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^n) a circuit C (with no restriction on the type of C) computing f, so that the size of C is less than the trivial circuit size 2^n/n. We get non-trivial compression for functions computable by AC^0 circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which lower bounds for the corresponding circuit class are known. These compression algorithms rely on structural characterizations of "easy" functions, which are useful both for proving circuit lower bounds and for designing "meta-algorithms" (such as Circuit-SAT). For (de Morgan) formulas, such a structural characterization is provided by the "shrinkage under random restrictions" results [52], [21], strengthened to the "high-probability" version by [48], [26], [33]. We give a new, simple proof of the "high-probability" version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^2. We also use this shrinkage result to get an alternative proof of the recent result by Komargodski and Raz [33] of the average-case lower bound against small (de Morgan) formulas. Finally, we show that the existence of any non-trivial compression algorithm for a circuit class C ⊆ P/poly would imply the circuit lower bound NEXP ⊄ C. This complements Williams's result [55] that any non-trivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
doi:10.1007/s00037-015-0100-0 fatcat:r625fgu375arxokmoistrnflgu
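
For contrast with the 2^n/n target named above, the trivial approach, reading a circuit straight off the truth table as a DNF of minterms, has size about n·2^n and so never meets the bound. A small sketch of that baseline (the minterm representation is ad hoc, for illustration only):

```python
# Trivial "compression" baseline: the DNF of all minterms of f.
# Each minterm is a tuple of signed variable indices (i+1 positive, -(i+1) negated).
from itertools import product

def truth_table_to_dnf(n, f):
    """Return the list of minterms of the n-variate Boolean function f."""
    return [tuple(i + 1 if bits[i] else -(i + 1) for i in range(n))
            for bits in product((0, 1), repeat=n) if f(bits)]

def maj3(bits): return sum(bits) >= 2

print(truth_table_to_dnf(3, maj3))
# [(-1, 2, 3), (1, -2, 3), (1, 2, -3), (1, 2, 3)]
```

The paper's point is that for restricted classes (AC^0, formulas, branching programs), the structure exploited by the lower-bound proofs lets one do non-trivially better than this baseline.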

GANs Reels: Creating Irish Music using a Generative Adversarial Network [article]

Antonina Kolokolova, Mitchell Billard, Robert Bishop, Moustafa Elsisy, Zachary Northcott, Laura Graves, Vineel Nagisetty, Heather Patey
2020 arXiv   pre-print
In this paper we present a method for algorithmic melody generation using a generative adversarial network without recurrent components. Music generation has been done successfully using recurrent neural networks, where the model learns sequence information that can help create authentic-sounding melodies. Here, we use a DC-GAN architecture with dilated convolutions and towers to capture sequential information as spatial image information, and to learn long-range dependencies in fixed-length melody forms such as the Irish traditional reel.
arXiv:2010.15772v1 fatcat:ifquy5qd5zebjgtjgafabu33ku
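
The architectural idea, capturing long-range melodic structure with dilated convolutions instead of recurrence, can be sketched briefly. The layer sizes, the 64-step melody length, and the single pitch channel below are illustrative guesses, not the paper's architecture:

```python
# Minimal PyTorch sketch of a non-recurrent melody generator with dilated 1-D convolutions.
import torch
import torch.nn as nn

class MelodyGenerator(nn.Module):
    def __init__(self, z_dim=64, steps=64):
        super().__init__()
        self.steps = steps
        self.fc = nn.Linear(z_dim, 32 * steps)
        self.net = nn.Sequential(
            # growing dilation widens the receptive field without recurrence
            nn.Conv1d(32, 32, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, dilation=4, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),  # one pitch value per time step
            nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 32, self.steps)  # (batch, channels, time)
        return self.net(x)

g = MelodyGenerator()
print(g(torch.randn(2, 64)).shape)  # torch.Size([2, 1, 64])
```

Stacking dilations 1, 2, 4 grows the receptive field exponentially with depth, which is how fixed-length forms like a reel can be modeled without an RNN.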

An axiomatic approach to algebrization

Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova
2009 Proceedings of the 41st annual ACM symposium on Symposium on theory of computing - STOC '09  
Non-relativization of complexity issues can be interpreted as giving some evidence that these issues cannot be resolved by "black-box" techniques. In the early 1990's, a sequence of important non-relativizing results was proved, mainly using algebraic techniques. Two approaches have been proposed to understand the power and limitations of these algebraic techniques: (1) Fortnow [12] gives a construction of a class of oracles which have a similar algebraic and logical structure, although they can be arbitrarily powerful. He shows that many of the non-relativizing results proved using algebraic techniques hold for all such oracles, but he does not show, e.g., that the outcome of the "P vs. NP" question differs between different oracles in that class. (2) Aaronson and Wigderson [1] give definitions of algebrizing separations and collapses of complexity classes, by comparing classes relative to one oracle to classes relative to an algebraic extension of that oracle. Using these definitions, they show both that the standard collapses and separations "algebrize" and that many of the open questions in complexity fail to "algebrize", suggesting that the arithmetization technique is close to its limits. However, it is unclear how to formalize algebrization of more complicated complexity statements than collapses or separations, and whether the algebrizing statements are, e.g., closed under modus ponens; so it is conceivable that several algebrizing premises could imply (in a relativizing way) a non-algebrizing conclusion. In this paper, building on the work of Arora, Impagliazzo, and Vazirani [4], we propose an axiomatic approach to "algebrization", which complements and clarifies the approaches of [12] and [1]. We present logical theories formalizing the notion of algebrizing techniques in the following sense: most known complexity results proved using arithmetization are provable within our theories, while many open questions are independent of the theories. So provability in the proposed theories can serve as a surrogate for provability using the arithmetization technique. Our theories extend the theory of [4] with a new axiom, Arithmetic Checkability, which intuitively says that all NP languages have verifiers that are efficiently computable low-degree polynomials (over the integers). We show the following: (i) Arithmetic Checkability holds relative to arbitrarily powerful oracles (since Fortnow's algebraic oracles from [12] all satisfy the Arithmetic Checkability axiom). (ii) Most of the algebrizing collapses and separations from [1], such as IP = PSPACE, NP ⊆ ZKIP if one-way functions exist, MA-EXP ⊄ P/poly, etc., are provable from Arithmetic Checkability. (iii) Many of the open complexity questions (including most of those shown to require non-algebrizing techniques in [1]), such as "P vs. NP" and "NP vs. BPP", cannot be proved from Arithmetic Checkability. (iv) Arithmetic Checkability is also insufficient to prove one known result, NEXP = MIP (although relative to an oracle satisfying Arithmetic Checkability, NEXP^O restricted to poly-length queries is contained in MIP^O, mirroring a similar result from [1]).
doi:10.1145/1536414.1536509 dblp:conf/stoc/ImpagliazzoKK09 fatcat:o6tthbtumnde3criihmkxnzxbm
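
Reading the informal gloss of the new axiom literally, one way to render Arithmetic Checkability is the following. This is an informal paraphrase for orientation only, not the paper's formal axiom:

```latex
% Informal paraphrase of Arithmetic Checkability (not the paper's formal statement):
\forall L \in \mathrm{NP}\;\; \exists c \in \mathbb{N}\;\; \exists \{p_n\}_{n \in \mathbb{N}} :
\quad x \in L \iff \exists y \in \{0,1\}^{|x|^c}\; p_{|x|}(x, y) = 1 ,
% where each p_n is a polynomial over \mathbb{Z} of degree at most n^c,
% uniformly computable in time polynomial in n.
```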

Mining Circuit Lower Bound Proofs for Meta-algorithms

Ruiwen Chen, Valentine Kabanets, Antonina Kolokolova, Ronen Shaltiel, David Zuckerman
2014 2014 IEEE 29th Conference on Computational Complexity (CCC)  
We show that circuit lower bound proofs based on the method of random restrictions yield non-trivial compression algorithms for "easy" Boolean functions from the corresponding circuit classes. The compression problem is defined as follows: given the truth table of an n-variate Boolean function f computable by some unknown small circuit from a known class of circuits, find in deterministic time poly(2^n) a circuit C (with no restriction on the type of C) computing f, so that the size of C is less than the trivial circuit size 2^n/n. We get non-trivial compression for functions computable by AC^0 circuits, (de Morgan) formulas, and (read-once) branching programs of the size for which lower bounds for the corresponding circuit class are known. These compression algorithms rely on structural characterizations of "easy" functions, which are useful both for proving circuit lower bounds and for designing "meta-algorithms" (such as Circuit-SAT). For (de Morgan) formulas, such a structural characterization is provided by the "shrinkage under random restrictions" results [52], [21], strengthened to the "high-probability" version by [48], [26], [33]. We give a new, simple proof of the "high-probability" version of the shrinkage result for (de Morgan) formulas, with improved parameters. We use this shrinkage result to get both compression and #SAT algorithms for (de Morgan) formulas of size about n^2. We also use this shrinkage result to get an alternative proof of the recent result by Komargodski and Raz [33] of the average-case lower bound against small (de Morgan) formulas. Finally, we show that the existence of any non-trivial compression algorithm for a circuit class C ⊆ P/poly would imply the circuit lower bound NEXP ⊄ C. This complements Williams's result [55] that any non-trivial Circuit-SAT algorithm for a circuit class C would imply a superpolynomial lower bound against C for a language in NEXP.
doi:10.1109/ccc.2014.34 dblp:conf/coco/ChenKKSZ14 fatcat:7sv5lgqvh5he7h2x7k4i67auj4

Agnostic Learning from Tolerant Natural Proofs

Marco L. Carmosino, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, Marc Herbstritt
2017 International Workshop on Approximation Algorithms for Combinatorial Optimization  
[Full-text matches only; no abstract shown. Matched passages: Definition 4 (Tolerant Natural Property); Lemma 15 (Tolerant natural property for AC^0[q]).]
doi:10.4230/lipics.approx-random.2017.35 dblp:conf/approx/CarmosinoIKK17 fatcat:vewhdm7sjzhsjmn2ljblst2ezi

A second-order system for polytime reasoning based on Grädel's theorem

Stephen Cook, Antonina Kolokolova
2003 Annals of Pure and Applied Logic  
We introduce a second-order system V1-Horn of bounded arithmetic formalizing polynomial-time reasoning, based on Grädel's second-order Horn characterization of P (Theoret. Comput. Sci. 101 (1992) 35). Our system has comprehension over P predicates (defined by Grädel's second-order Horn formulas), and only finitely many function symbols. Other systems of polynomial-time reasoning either allow induction on NP predicates (such as Buss's S^1_2 or the second-order V^1_1), and hence are more powerful than our system (assuming the polynomial hierarchy does not collapse), or use Cobham's theorem to introduce function symbols for all polynomial-time functions (such as Cook's PV and Zambella's P-def). We prove that our system is equivalent to QPV and Zambella's P-def. Using our techniques, we also show that V1-Horn is finitely axiomatizable and, as a corollary, that the class of ∀Σ^b_1 consequences of S^1_2 is finitely axiomatizable as well, thus answering an open question.
doi:10.1016/s0168-0072(03)00056-3 fatcat:xx73pukpxfg4fp7hjyxrpqopoa
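
Grädel's characterization underlying V1-Horn concerns second-order Horn formulas over ordered structures. Their general shape, in the textbook rendering (conventions may differ cosmetically from the paper's):

```latex
% Shape of a second-order existential Horn (SO\exists-Horn) formula:
\exists P_1 \cdots \exists P_k \;\forall \bar{x}\; \bigwedge_{j} C_j(\bar{x}) ,
\qquad \text{each clause } C_j \text{ containing at most one positive occurrence of a } P_i .
```

Grädel's theorem states that, on ordered finite structures, such formulas define exactly the polynomial-time properties, which is why comprehension over them yields a theory for polytime reasoning.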

On the Hierarchical Community Structure of Practical Boolean Formulas [article]

Chunxiao Li, Jonathan Chung, Soham Mukherjee, Marc Vinyals, Noah Fleming, Antonina Kolokolova, Alice Mu, Vijay Ganesh
2021 arXiv   pre-print
Modern CDCL SAT solvers easily solve industrial instances containing tens of millions of variables and clauses, despite the theoretical intractability of the SAT problem. This gap between practice and theory is a central problem in solver research. It is believed that SAT solvers exploit structure inherent in industrial instances, and hence there have been numerous attempts over the last 25 years at characterizing this structure via parameters. These can be classified as rigorous, i.e., they serve as a basis for complexity-theoretic upper bounds (e.g., backdoors), or correlative, i.e., they correlate well with solver run time and are observed in industrial instances (e.g., community structure). Unfortunately, no parameter proposed to date has been shown to be both strongly correlative and rigorous over a large fraction of industrial instances. Given the sheer difficulty of the problem, we aim for the intermediate goal of proposing a set of parameters that is strongly correlative and has good theoretical properties. Specifically, we propose parameters based on a graph partitioning called Hierarchical Community Structure (HCS), which captures the recursive community structure of a graph of a Boolean formula. We show that HCS parameters are strongly correlative with solver run time using an empirical hardness model, and further build a classifier based on HCS parameters that distinguishes between easy industrial and hard random/crafted instances with very high accuracy. We further strengthen our hypotheses via scaling studies. On the theoretical side, we show that counterexamples which plagued community structure do not apply to HCS, and that there is a subset of HCS parameters such that restricting them limits the size of embeddable expanders.
arXiv:2103.14992v2 fatcat:3zt3wzba7zcg5fbq3ejlfsqiiq
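
The object behind the HCS parameters is a graph extracted from the formula, decomposed recursively into communities. A sketch of that pipeline on the variable-incidence graph (the CNF encoding and the use of NetworkX's greedy modularity routine as the partitioner are our illustrative choices, not the paper's exact method):

```python
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def vig(clauses):
    """Variable-incidence graph: variables are nodes; an edge joins variables sharing a clause."""
    G = nx.Graph()
    for clause in clauses:
        for u, v in combinations({abs(lit) for lit in clause}, 2):
            G.add_edge(u, v)
    return G

def hierarchy(G, depth=0, max_depth=3):
    """Recursively split into communities, yielding (depth, community) pairs."""
    for part in greedy_modularity_communities(G):
        yield depth, set(part)
        if depth < max_depth and 1 < len(part) < G.number_of_nodes():
            yield from hierarchy(G.subgraph(part), depth + 1, max_depth)

# Two tightly-knit variable groups joined by a single bridging clause.
cnf = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, -4)]
for d, community in hierarchy(vig(cnf)):
    print(d, sorted(community))
```

Parameters of this flavor (community sizes and inter-community connectivity at each level of the decomposition tree) are the kind of statistics the paper correlates with solver run time.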

LEARN-Uniform Circuit Lower Bounds and Provability in Bounded Arithmetic

Marco Carmosino, Valentine Kabanets, Antonina Kolokolova, Igor C. Oliveira
2022 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS)  
doi:10.1109/focs52979.2021.00080 fatcat:crwtv7f365dv3g6j6kmtanxwie

AC^0[p] Lower Bounds Against MCSP via the Coin Problem

Alexander Golovnev, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Antonina Kolokolova, Avishay Tal, Michael Wagner
2019 International Colloquium on Automata, Languages and Programming  
doi:10.4230/lipics.icalp.2019.66 dblp:conf/icalp/GolovnevIIKKT19 fatcat:a34e3ayp6fesnevmeistfl55yi
Showing results 1-15 of 49.