IA Scholar Query: Non-Commutative Formulas and Frege Lower Bounds: a New Characterization of Propositional Proofs.
https://scholar.archive.org/
Internet Archive Scholar query results feed (info@archive.org). Mon, 19 Sep 2022.

Universal Proof Theory: Constructive Rules and Feasible Admissibility
https://scholar.archive.org/work/g52zarcdkzflnaaj4s2w7m7oje
Visser's rules have an essential role in intermediate logics. They form a basis for the admissible rules of intuitionistic logic and of any intermediate logic in which they are admissible. In this paper, we follow the universal proof theory program introduced and developed in [1, 2, 24, 25] to establish a connection between the form of the rules in a sequent calculus for an intuitionistic modal logic and the admissibility of Visser's rules in that logic. More precisely, by investigating the form of the constructively acceptable rules, we first introduce a very general family of rules called the constructive rules. Then, defining a constructive sequent calculus as a calculus consisting of constructive rules and some basic modal rules, we prove that any constructive sequent calculus stronger than 𝐂𝐊 that satisfies a mild technical condition feasibly admits all of Visser's rules, i.e., there is a polynomial-time algorithm that reads a proof of the premise of one of Visser's rules and provides a proof of its conclusion. This connection has two types of applications. On the positive side, it proves the feasible admissibility of Visser's rules in the sequent systems for several intuitionistic modal logics, including 𝖢𝖪, 𝖨𝖪, their extensions by the usual modal axioms T, B, 4, 5, the modal axioms of bounded width and depth, and propositional lax logic. On the negative side, it shows that if an intuitionistic modal logic satisfying a mild technical condition does not admit Visser's rules, then it cannot have a constructive sequent calculus.
Amirhossein Akbar Tabatabai, Raheleh Jalali. Mon, 19 Sep 2022.

Programs as Diagrams: From Categorical Computability to Computable Categories
https://scholar.archive.org/work/symgo7adkvdfzhreu2ez7uykai
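The "program evaluators" mentioned in the abstract below can be given a hedged, toy reading in ordinary code (an analogy only, not the book's categorical construction): a single universal instruction RUN that applies a program, represented as data, to an input. The name `RUN` and the encoding of programs as source strings are illustrative assumptions.

```python
def RUN(program, x):
    """Toy universal evaluator: 'program' is data (source code defining
    a unary function f), and RUN applies it to the input x."""
    env = {}
    exec(program, env)
    return env["f"](x)

# a program is just data until RUN evaluates it
square = "def f(n): return n * n"
assert RUN(square, 7) == 49
```

Since programs are ordinary data here, RUN can in particular evaluate a program that itself calls RUN, which is the flavor of self-applicability the diagrammatic language exploits.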
This is a draft of the first 7 chapters of a textbook/monograph that presents computability theory using string diagrams. The introductory chapters have been taught as graduate and undergraduate courses and evolved through 8 years of lecture notes. The later chapters contain new ideas and results about categorical computability and some first steps into computable category theory. The underlying categorical view of computation is based on monoidal categories with program evaluators, called *monoidal computers*. This categorical structure can be viewed as a single-instruction diagrammatic programming language called Run, whose only instruction is called RUN. This version: changed the title and worked on improving the text. (Lots of exercises and workouts were also added, but they were already overflowing the arXiv size bounds in the earlier version, which is why the "workouts" and the "stories" are commented out.)
Dusko Pavlovic. Sat, 10 Sep 2022.

Symmetry and Reformulation: On Intellectual Progress in Science and Mathematics
https://scholar.archive.org/work/janplqjfcrh6jlqax5yq7dgboy
Science and mathematics continually change in their tools, methods, and concepts. Many of these changes are not just modifications but progress: steps to be admired. But what constitutes progress? This dissertation addresses one central source of intellectual advancement in both disciplines: reformulating a problem-solving plan into a new, logically compatible one. For short, I call these cases of compatible problem-solving plans "reformulations." Two aspects of reformulations are puzzling. First, reformulating is often unnecessary. Given that we could already solve a problem using an older formulation, what do we gain by reformulating? Second, some reformulations are genuinely trivial or insignificant. Merely replacing one symbol with another does not lead to intellectual progress. What distinguishes significant reformulations from trivial ones? According to what I call "conceptualism" (or "conceptual empiricism"), reformulations are intellectually significant when they provide a different plan for solving problems. Significant reformulations provide inferentially different routes to the same solution. In contrast, trivial reformulations provide exactly the same problem-solving plans, and hence they do not change our understanding. This answers the second question, about what distinguishes trivial from significant reformulations. However, the first question remains: what makes a new way of solving an old problem valuable? Here, a bevy of practical considerations come to mind: one formulation might be faster, less complicated, or use more familiar concepts. According to "instrumentalism," these practical benefits are all there is to reformulating. Some reformulations are simply more instrumentally valuable for meeting the aims of science than others. At another extreme, "fundamentalism" contends that a reformulation is valuable when it provides a more fundamental description of reality. According to this view, some reformulations directly contribute to the metaphysical aim of carving reality at its joints. Concep [...]
Josh Hunt. Tue, 06 Sep 2022.

Beyond Natural Proofs: Hardness Magnification and Locality
https://scholar.archive.org/work/pxzw5rfppzbopiwrw7cjwxevky
Hardness magnification reduces major complexity separations (such as \(\mathsf{EXP} \nsubseteq \mathsf{NC}^1\)) to proving lower bounds for some natural problem Q against weak circuit models. Several recent works [11, 13, 14, 40, 42, 43, 46] have established results of this form. In the most intriguing cases, the required lower bound is known for problems that appear to be significantly easier than Q, while Q itself is susceptible to lower bounds, but these are not yet sufficient for magnification. In this work, we provide more examples of this phenomenon and investigate the prospects of proving new lower bounds using this approach. In particular, we consider the following essential questions associated with the hardness magnification program:
– Does hardness magnification avoid the natural proofs barrier of Razborov and Rudich [51]?
– Can we adapt known lower bound techniques to establish the desired lower bound for Q?
We establish that some instantiations of hardness magnification overcome the natural proofs barrier in the following sense: slightly superlinear-size circuit lower bounds for certain versions of the minimum circuit size problem \(\mathsf{MCSP}\) imply the non-existence of natural proofs. As the non-existence of natural proofs implies the non-existence of efficient learning algorithms, we show that certain magnification theorems not only imply strong worst-case circuit lower bounds but also rule out the existence of efficient learning algorithms. Hardness magnification might sidestep natural proofs, but we identify a source of difficulty when trying to adapt existing lower bound techniques to prove strong lower bounds via magnification. This is captured by a locality barrier: existing magnification theorems unconditionally show that the problems Q considered above admit highly efficient circuits extended with small fan-in oracle gates, while lower bound techniques against weak circuit models quite often easily extend to circuits containing such oracles. This explains why direct adaptations of certain lower bounds are unlikely to yield strong complexity separations via hardness magnification.
Lijie Chen, Shuichi Hirahara, Igor C. Oliveira, Ján Pich, Ninad Rajgopal, Rahul Santhanam. Fri, 12 Aug 2022.

Analyzing meaning: An introduction to semantics and pragmatics. Third edition
https://scholar.archive.org/work/bet372y6iba2rhvxwe3xfjz7ee
This book provides an introduction to the study of meaning in human language, from a linguistic perspective. It covers a fairly broad range of topics, including lexical semantics, compositional semantics, and pragmatics. The chapters are organized into six units: (1) Foundational concepts; (2) Word meanings; (3) Implicature (including indirect speech acts); (4) Compositional semantics; (5) Modals, conditionals, and causation; (6) Tense & aspect. Most of the chapters include exercises which can be used for class discussion and/or homework assignments, and each chapter contains references for additional reading on the topics covered. As the title indicates, this book is truly an introduction: it provides a solid foundation which will prepare students to take more advanced and specialized courses in semantics and/or pragmatics. It is also intended as a reference for fieldworkers doing primary research on under-documented languages, to help them write grammatical descriptions that deal carefully and clearly with semantic issues. The approach adopted here is largely descriptive and non-formal (or, in some places, semi-formal), although some basic logical notation is introduced. The book is written at a level which should be appropriate for advanced undergraduate or beginning graduate students. It presupposes some previous coursework in linguistics, but does not presuppose any background in formal logic or set theory.
Paul R. Kroeger. Mon, 18 Jul 2022.

Structural Frameworks with Higher-Level Rules: Philosophical Investigations on the Foundations of Formal Reasoning
https://scholar.archive.org/work/3kgiafsne5bzdl5sifkfkbhpym
This 1987 habilitation thesis develops and examines the idea of higher-level inference rules in the context of propositional logic (with and without negation), logic programming, relevance logic, and Martin-Löf's type theory.
Peter Schroeder-Heister, Universität Tübingen. Mon, 27 Jun 2022.

Topos and Stacks of Deep Neural Networks
https://scholar.archive.org/work/xw4jwxtjbbfetawfm7jayjlwgi
Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck topos; its learning dynamic corresponds to a flow of morphisms in this topos. Invariance structures in the layers (as in CNNs or LSTMs) correspond to Giraud's stacks. This invariance is supposed to be responsible for the generalization property, that is, extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics that are intuitionistic, classical, or linear (Girard). The semantic functioning of a network is its ability to express theories in such a language for answering questions in output about input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy of P. Baudot and D. Bennequin (2015). They generalize the measures found by Carnap and Bar-Hillel (1952). Amazingly, the above semantic structures are classified by geometric fibrant objects in a closed model category in the sense of Quillen, and they then give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and the fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators.
Jean-Claude Belfiore, Daniel Bennequin. Thu, 16 Jun 2022.

Situation Theory and Channel theory as a Unified Framework for Imperfect Information Management
https://scholar.archive.org/work/tvy5npf7jvam5i7cn5ogqox3ue
This article argues that Situation theory and Channel theory can be used as a general framework for imperfect information management. The different kinds of imperfection are uncertainty, imprecision, vagueness, incompleteness, inconsistency, and context-dependency, all of which the human brain handles rather well. Basic approaches like probability theory and standard logic are intrinsically ill-suited to modeling fallible minds. The generalized probability and nonstandard logic theories have epistemological motivations to provide better models for information integration in cognitive agents. Among the many such models, possibility theory and probabilistic logic theory are the best approaches. I argue, based on a review of different approaches to imperfect information management, that a good framework for it is the Situation theory of Barwise and the Channel theory of Barwise and Seligman. These theories rely on a powerful and unique epistemologically based notion of information to refer to partiality. They offer a proper approach to context modeling for handling common knowledge and incomplete information. They also clearly distinguish belief from knowledge in order to model the non-monotonic and dynamic nature of knowledge, and they discern the logic of the world from the information flow in the mind. The objectification process in these theories reveals the nature of default or probabilistic rules in perception. The concept of the channel can be used to represent those types of reasoning mechanisms that move from one model or logic to another. The imprecision in our perceptions causes fuzziness in reasoning and vagueness in communication, which can be represented by suitable classifications connected by channels. This new framework, like a network framework, can provide a scalable and open framework covering different models based on a relativistic notion of truth.
Farhad Naderian. Sun, 05 Jun 2022.

Point-Dimension Theory (Part I): The Point-Extended Box Dimension
https://scholar.archive.org/work/yrd4zd24fbeqtjpj4diglkb2jy
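As background for the abstract below, here is a hedged numerical sketch of the classical box-counting dimension that the paper sets out to extend, estimated for the middle-thirds Cantor set. The construction depth is an arbitrary choice, and this is standard textbook material, not the paper's point-extended notion.

```python
from math import log

def cantor_cells(depth):
    """Indices m of the length-(1/3^depth) grid cells [m/3^d, (m+1)/3^d]
    occupied by the middle-thirds Cantor set at construction depth d."""
    cells = [0]
    for _ in range(depth):
        # each surviving interval splits into its left and right thirds
        cells = [3 * m for m in cells] + [3 * m + 2 for m in cells]
    return cells

depth = 10
n_boxes = len(set(cantor_cells(depth)))   # N(eps) with eps = 3^-depth
eps = 3.0 ** (-depth)
estimate = log(n_boxes) / log(1.0 / eps)  # log N(eps) / log(1/eps)
# the classical box dimension of the Cantor set is log 2 / log 3 ≈ 0.631
assert abs(estimate - log(2) / log(3)) < 1e-6
```

At depth d there are 2^d occupied cells of side 3^-d, so the estimate is exactly d·log 2 / (d·log 3), which is the comparison-of-entropies reading of dimension the abstract alludes to.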
This article is an introductory work to a larger research project devoted to pure, applied, and philosophical aspects of dimension theory. It concerns a novel approach toward an alternative foundation for dimension theory: the point-dimension theory. For this purpose, historical research on this notion and related concepts, combined with critical analysis and philosophical development, proved necessary. Hence, our main objective is to challenge the conventional zero dimension assigned to the point. This reconsideration allows us to propose two new ways of conceiving the notion of dimension, which are two sides of the same coin. First, as an organization; accordingly, we suggest the existence of the Dimensionad, an elementary particle conferring dimension on objects and space-time. The idea of the existence of this particle could possibly be adopted as a projection to create an alternative way to unify quantum mechanics and Einstein's general relativity. Secondly, in connection with the Boltzmann and Shannon entropies, dimension appears essentially as a comparison between entropies of sets. Thus, we started from the point and succeeded in constructing a point-dimension notion allowing us to extend the principle of box dimension in many directions. More precisely, we introduce the notion of point-extended box dimension in the large framework of topological vector spaces, freeing it from the notion of metric. This general setting permits us to treat the cases of finite, infinite, and invisible dimensions. This first part of our research project focuses essentially on general properties and is particularly oriented towards establishing a well-founded framework for infinite dimension. Among other prospects, one is to test the possibility of using other types of spaces as a setting for quantum mechanics, instead of limiting it to the exclusively Hilbertian framework.
Nadir Maaroufi, El Hassan Zerouali. Fri, 27 May 2022.

Stabbing Planes
https://scholar.archive.org/work/sgbzuky7incr7mhgfzefobgzsm
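The single branching rule described in the abstract below can be sketched in a few lines. The list-of-inequalities encoding and the brute-force integer search over a small box are illustrative assumptions, not the paper's formal system.

```python
from itertools import product

def satisfies(point, ineqs):
    # ineqs is a list of pairs (a, b) encoding the constraint a . x >= b
    return all(sum(ai * xi for ai, xi in zip(a, point)) >= b for a, b in ineqs)

def branch(ineqs, a, b):
    """One Stabbing Planes step on the integer hyperplane a . x >= b:
    the left branch keeps a . x >= b, the right branch keeps a . x <= b - 1,
    and the open slab b - 1 < a . x < b, containing no integer points,
    is discarded."""
    left = ineqs + [(a, b)]
    neg_a = tuple(-ai for ai in a)
    right = ineqs + [(neg_a, -(b - 1))]  # -a . x >= -(b - 1)  <=>  a . x <= b - 1
    return left, right

# Toy refutation: x + y >= 1 together with -x - y >= 0 has no integer solution.
# Branch on x >= 1 and check both children by brute force over a small box.
system = [((1, 1), 1), ((-1, -1), 0)]
left, right = branch(system, (1, 0), 1)
points = list(product(range(-3, 4), repeat=2))
assert not any(satisfies(p, left) for p in points)
assert not any(satisfies(p, right) for p in points)
```

In the real proof system each branch is refuted recursively until the current polytope is empty; the brute-force check above only stands in for that recursion on a toy instance.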
We develop a new semi-algebraic proof system called Stabbing Planes which formalizes modern branch-and-cut algorithms for integer programming and is in the style of DPLL-based modern SAT solvers. As with DPLL, there is only a single rule: the current polytope can be subdivided by branching on an inequality and its "integer negation." That is, we can (nondeterministically) choose a hyperplane ax ≥ b with integer coefficients, which partitions the polytope into three pieces: the points in the polytope satisfying ax ≥ b, the points satisfying ax ≤ b − 1, and the middle slab b − 1 < ax < b. Since the middle slab contains no integer points, it can be safely discarded, and the algorithm proceeds recursively on the other two branches. Each path terminates when the current polytope is empty, which is polynomial-time checkable. Among our results, we show that Stabbing Planes can efficiently simulate the Cutting Planes proof system and is equivalent to a tree-like variant of the R(CP) system of Krajíček (1998). As well, we show that it possesses short proofs of the canonical family of systems of 𝔽_2-linear equations known as the Tseitin formulas. Finally, we prove linear lower bounds on the rank of Stabbing Planes refutations by adapting lower bounds in communication complexity, and we use these bounds to show that Stabbing Planes proofs cannot be balanced.
Paul Beame, Noah Fleming, Russell Impagliazzo, Antonina Kolokolova, Denis Pankratov, Toniann Pitassi, Robert Robere. Wed, 18 May 2022.

On the proof complexity of logics of bounded branching
https://scholar.archive.org/work/qethsfzfgrcw7orhdvibujcnje
We investigate the proof complexity of extended Frege (EF) systems for basic transitive modal logics (K4, S4, GL, ...) augmented with the bounded branching axioms 𝐁𝐁_k. First, we study the feasibility of the disjunction property and more general extension rules in EF systems for these logics: we show that the corresponding decision problems reduce to total coNP search problems (or equivalently, disjoint NP pairs, in the binary case); more precisely, the decision problem for extension rules is equivalent to a certain special case of interpolation for the classical EF system. Next, we use this characterization to prove superpolynomial (or even exponential, with stronger hypotheses) separations between EF and substitution Frege (SF) systems for all transitive logics contained in 𝐒4.2𝐆𝐫𝐳𝐁𝐁_2 or 𝐆𝐋.2𝐁𝐁_2 under some assumptions weaker than PSPACE ⊈ NP. We also prove analogous results for superintuitionistic logics: we characterize the decision complexity of multi-conclusion Visser's rules in EF systems for the Gabbay–de Jongh logics 𝐓_k, and we show conditional separations between EF and SF for all intermediate logics contained in 𝐓_2 + 𝐊𝐂.
Emil Jeřábek. Mon, 16 May 2022.

A Combinatorial Theory of Compossibility in Leibniz's Metaphysics
https://scholar.archive.org/work/drngaudnefe3nljm5a3qhnjkom
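The structural point in the abstract below, that a reflexive, symmetric relation need not be transitive, can be checked mechanically on a toy example; the three "substances" here are placeholders, not anything from Leibniz's texts.

```python
# A toy compossibility-style relation: reflexive and symmetric by
# construction, yet a ~ b and b ~ c while a and c are not related.
pairs = {("a", "b"), ("b", "c")}

def comp(x, y):
    return x == y or (x, y) in pairs or (y, x) in pairs

assert comp("a", "b") and comp("b", "c")  # pairwise compossible
assert not comp("a", "c")                 # yet the relation is intransitive
```

An equivalence relation would partition substances into disjoint "worlds"; intransitivity is exactly what blocks that picture, which is the dissertation's target.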
Many contemporary metaphysicians think that for any two distinct things, it is always possible for them to coexist with one another. Leibniz gives a somewhat different answer: two distinct things are able to coexist with one another only when they are compossible. God cannot create all possible substances together because not all of them are compossible. But what is the basis within Leibniz's philosophy for the incompossibility of substances? This has been one of the most hotly contested issues in the recent secondary literature. Four kinds of interpretations have been presented. Logical interpretations maintain that compossibility is ultimately nothing but logical consistency. Advocates of logical interpretations argue that two possible substances are compossible just in case their complete concepts are logically consistent. In contrast, lawful, cosmological, and packing interpretations assume that possible substances are logically independent of one another. They maintain that any two possible substances are per se compossible. However, God is precluded from actualizing all possible substances by some non-logical constraints. The literature has long been dominated by variations of these four approaches. In this dissertation, however, I show that there is one important issue that has been largely overlooked: the compossibility relation is intransitive. Intransitivity is problematic for all the above interpretations, since they all seem to agree that the compossibility relation is transitive. According to logical interpretations, each possible substance is compossible with and only with its world-mates; thus, compossibility is an equivalence relation (reflexive, symmetric, and transitive). According to lawful, cosmological, and packing interpretations, the compossibility relation is trivially transitive, since any two possible substances are per se compossible. However, there are passages where Leibniz suggests that the compossibility relation is intransitive. If compossibility is intransitive for him, then no [...]
Jun Young Kim. Thu, 17 Feb 2022.

Elements of Mathematics
https://scholar.archive.org/work/3y6wiyaqqvhalhyfxfh2t7gmx4
The present chapter introduces the fields of mathematics that will be considered "elementary" in this book. They have all been considered "elementary" at some stage in the history of mathematics education, and they are all still taught at school level in some places today. But even "elementary" topics have their mysteries and difficulties, which call for explanation from a "higher standpoint." As we will show, this applies to the topics considered by Klein (1908)—arithmetic, algebra, analysis, and geometry—plus a few other topics that existed only in embryonic form in 1908 but are quite mature today.
John Stillwell. Mon, 14 Feb 2022.

Ideals, Determinants, and Straightening: Proving and Using Lower Bounds for Polynomial Ideals
https://scholar.archive.org/work/t6rhkmapcvfqhoyyq2odufbdxe
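The generators discussed in the abstract below, the r × r minors of a matrix, can be computed concretely. This hedged sketch only enumerates the generators of the ideal over the integers; it does not touch the paper's border-complexity results.

```python
from itertools import combinations

def det(m):
    # Laplace expansion along the first row; fine for the tiny matrices here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def minors(mat, r):
    """All r x r minors of a square matrix: determinants of the submatrices
    picked out by every choice of r rows and r columns."""
    n = len(mat)
    return [det([[mat[i][j] for j in cols] for i in rows])
            for rows in combinations(range(n), r)
            for cols in combinations(range(n), r)]

# On a rank-1 matrix every 2 x 2 minor vanishes, so every polynomial in the
# ideal generated by the 2 x 2 minors vanishes on it as well.
rank1 = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
assert all(m == 0 for m in minors(rank1, 2))
assert det(rank1) == 0
```

The paper's result concerns the opposite direction: any single nonzero polynomial in such a minor ideal is already enough to approximate a smaller determinant via an oracle circuit.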
We show that any nonzero polynomial in the ideal generated by the r × r minors of an n × n matrix X can be used to efficiently approximate the determinant. For any nonzero polynomial f in this ideal, we construct a small depth-three f-oracle circuit that approximates the determinant of a matrix of dimension Θ(r^{1/3}), in the sense of border complexity. For many classes of algebraic circuits, this implies that every nonzero polynomial in the ideal generated by r × r minors is at least as hard to approximately compute as the determinant of dimension Θ(r^{1/3}). We also prove an analogous result for the Pfaffian of a 2n × 2n skew-symmetric matrix and the ideal generated by the Pfaffians of its 2r × 2r principal submatrices. This answers a recent question of Grochow about complexity in polynomial ideals in the setting of border complexity. We give several applications of our result, two of which are highlighted below.
∙ We prove super-polynomial lower bounds for Ideal Proof System refutations computed by low-depth circuits. This extends the recent breakthrough low-depth circuit lower bounds of Limaye, Srinivasan, and Tavenas to the setting of proof complexity. For many natural circuit classes, we show that the approximative proof complexity of our hard instance is governed by the approximative circuit complexity of the determinant.
∙ We construct new hitting set generators for polynomial-size low-depth circuits. For any ε > 0, we construct generators with seed length O(n^ε) that attain a near-optimal tradeoff between their seed length and degree, and are computable by low-depth circuits of near-linear size (with respect to the size of their output). This matches the seed length of the generators recently obtained by Limaye, Srinivasan, and Tavenas, but improves on the generators' degree and circuit complexity.
Robert Andrews, Michael A. Forbes. Wed, 01 Dec 2021.

Learning algorithms versus automatability of Frege systems
https://scholar.archive.org/work/dyywfaeji5b5fky5aj6b5xipuy
We connect learning algorithms and algorithms automating proof search in propositional proof systems: for every sufficiently strong, well-behaved propositional proof system P, we prove that the following statements are equivalent.
1. Provable learning: P proves efficiently that p-size circuits are learnable by subexponential-size circuits over the uniform distribution with membership queries.
2. Provable automatability: P proves efficiently that P is automatable by non-uniform circuits on propositional formulas expressing p-size circuit lower bounds.
Here, P is sufficiently strong and well-behaved if I–III hold:
I. P p-simulates Jeřábek's system WF (which strengthens the Extended Frege system EF by a surjective weak pigeonhole principle);
II. P satisfies some basic properties of standard proof systems which p-simulate WF;
III. P proves efficiently, for some Boolean function h, that h is hard on average for circuits of subexponential size.
For example, if III holds for P = WF, then Items 1 and 2 are equivalent for P = WF. If there is a function h ∈ NE ∩ coNE which is hard on average for circuits of size 2^(n/4), for each sufficiently big n, then there is an explicit propositional proof system P satisfying properties I–III, i.e., the equivalence of Items 1 and 2 holds for P.
Ján Pich, Rahul Santhanam. Sat, 20 Nov 2021.

The lattice of super-Belnap logics
https://scholar.archive.org/work/75c2iolwy5btfjcp5njaetc5pu
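As background for the abstract below, the four-valued Belnap–Dunn matrix can be written out directly. The pair encoding ("told true", "told false") is one standard presentation and is assumed here rather than taken from the paper.

```python
# Belnap–Dunn values as (told-true, told-false) pairs:
# T = true only, B = both, N = neither, F = false only.
T, B, N, F = (1, 0), (1, 1), (0, 0), (0, 1)

def neg(v):            # negation swaps the two components
    return (v[1], v[0])

def conj(u, v):        # meet in the truth order
    return (u[0] & v[0], u[1] | v[1])

def disj(u, v):        # join in the truth order
    return (u[0] | v[0], u[1] & v[1])

def designated(v):     # T and B are the designated values
    return v[0] == 1

# Explosion (p, ¬p ⊨ q) fails in Belnap–Dunn logic: p = B makes p ∧ ¬p
# designated while q = F is not, so the logic is paraconsistent.
p, q = B, F
assert designated(conj(p, neg(p))) and not designated(q)
```

The "explosive" extensions in the abstract are exactly those that restore some form of this failed inference, which is where the connection to graph homomorphisms enters.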
We study the lattice of extensions of four-valued Belnap–Dunn logic, called super-Belnap logics by analogy with superintuitionistic logics. We describe the global structure of this lattice by splitting it into several subintervals, and we prove some new completeness theorems for super-Belnap logics. The crucial technical tool for this purpose is the so-called antiaxiomatic (or explosive) part operator. The antiaxiomatic (or explosive) extensions of Belnap–Dunn logic turn out to be of particular interest owing to their connection to graph theory: the lattice of finitary antiaxiomatic extensions of Belnap–Dunn logic is isomorphic to the lattice of upsets in the homomorphism order on finite graphs (with loops allowed). In particular, there is a continuum of finitary super-Belnap logics. Moreover, a non-finitary super-Belnap logic can be constructed with the help of this isomorphism. As algebraic corollaries, we obtain the existence of a continuum of antivarieties of De Morgan algebras and the existence of a prevariety of De Morgan algebras which is not a quasivariety.
Adam Přenosil. Thu, 18 Nov 2021.

Computational Complexity of Deciding Provability in Linear Logic and its Fragments
https://scholar.archive.org/work/ylszs2gw4zcfpl6kvo7qbbj2oe
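The resource reading described in the abstract below (no weakening, no contraction) can be caricatured in a few lines of code; the multiset encoding and the dollar-and-coffee example are illustrative assumptions only, not part of the thesis.

```python
from collections import Counter

def consume(resources, needed):
    """Toy reading of the linear-logic discipline: each use of a hypothesis
    removes one copy (no contraction), and whatever remains is reported
    rather than silently discarded (no weakening)."""
    r = Counter(resources)
    r.subtract(needed)
    if any(v < 0 for v in r.values()):
        return None   # some resource was used more often than it was supplied
    return +r         # leftover resources (empty means exact use)

# one dollar buys one coffee; buying two coffees from one dollar fails,
# which classical logic (where premises can be reused) would not detect
assert consume(["dollar"], ["dollar"]) == Counter()
assert consume(["dollar"], ["dollar", "dollar"]) is None
```

Provability in the actual sequent calculus is of course much finer-grained than multiset accounting; the point is only the interpretation of atoms as resources rather than truths.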
Linear logic was conceived in 1987 by Girard and, in contrast to classical logic, restricts the use of the structural inference rules of weakening and contraction. With this restriction, atoms of the logic are no longer interpreted as truth values but as information or resources. This interpretation makes linear logic a useful tool for formalization in mathematics and computer science; it has found applications in proof theory, quantum logic, and the theory of programming languages, for example. A central problem for the logic is whether a given list of formulas is provable in the calculus. Research on the complexity of this problem has produced some results, but other questions remain open. To present these questions and give new perspectives, this thesis consists of three main parts which build on each other. First, we present the syntax, proof theory, and various approaches to a semantics for linear logic; here already, we meet some open research questions. Second, we present the current state of the complexity-theoretic characterization of the most important fragments of linear logic; further research problems are presented here, and it becomes apparent that the known results all rely on different approaches. Third, we prove an original complexity characterization of a fragment of the logic and present ideas for a new, structural approach to the examination of provability in linear logic.
Florian Chudigiewitsch. Tue, 28 Sep 2021.