IA Scholar Query: Rewriting with a Nondeterministic Choice Operator: From Algebra to Proofs.
https://scholar.archive.org/
Internet Archive Scholar query results feed (fatcat-scholar, info@archive.org). Thu, 29 Sep 2022. Help: https://scholar.archive.org/help

Optimizing Regular Expressions via Rewrite-Guided Synthesis
https://scholar.archive.org/work/edegnurhmfc4nfxy2fvzlwi67m
Regular expressions are pervasive in modern systems. Many real-world regular expressions are inefficient, sometimes to the extent that they are vulnerable to complexity-based attacks, and while much research has focused on detecting inefficient regular expressions or accelerating regular expression matching at the hardware level, we investigate automatically transforming regular expressions to remove inefficiencies. We reduce this problem to general expression optimization, an important task in a variety of domains beyond compilers, e.g., digital logic design. Syntax-guided synthesis (SyGuS) with a cost function can be used for this purpose, but ordered enumeration through a large space of candidate expressions can be prohibitively expensive. Equality saturation is an alternative approach which allows efficient construction and maintenance of expression equivalence classes generated by rewrite rules, but the procedure may not reach saturation, meaning global minimality cannot be confirmed. We present a new approach called rewrite-guided synthesis (ReGiS), in which a unique interplay between SyGuS and equality saturation-based rewriting helps to overcome these problems, resulting in an efficient, scalable framework for expression optimization.
Jedidiah McClurg, Miles Claver, Jackson Garner, Jake Vossen, Jordan Schmerge, Mehmet E. Belviranli. Thu, 29 Sep 2022.

An Algebraic-Geometry Approach to Prime Factorization
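To make the ReGiS entry above concrete: the optimizations in question replace a regex with an equivalent but cheaper one. A minimal Python sketch (the rewrite pair below is my own illustrative example, not from the paper) checks that a nested-quantifier regex, which can backtrack exponentially on near-miss inputs, agrees with its rewritten form:

```python
import re

# (a+)+b can trigger catastrophic backtracking on inputs like "aaa...a";
# a+b denotes the same language without the nested quantifier.
slow = re.compile(r"(a+)+b")
fast = re.compile(r"a+b")

samples = ["b", "ab", "aaab", "aaa", "ba", ""]
# The two regexes accept exactly the same sample strings.
agree = all(bool(slow.fullmatch(s)) == bool(fast.fullmatch(s)) for s in samples)
```

A synthesis-based optimizer would discover the cheaper form automatically; here the rewritten regex is supplied by hand.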
https://scholar.archive.org/work/6ueylwcwybf7notmwqvuw626a4
New algorithms for prime factorization that outperform the existing ones or take advantage of particular properties of the prime factors can have a practical impact on present implementations of cryptographic algorithms that rely on the complexity of factorization. Currently used keys are chosen on the basis of the present algorithmic knowledge and, thus, can potentially be subject to future breaches. For this reason, it is worth investigating new approaches that have the potential to give a computational advantage. The problem is also relevant in quantum computation, as an efficient quantum algorithm for prime factorization already exists. Thus, better classical asymptotic complexity can provide a better understanding of the advantages offered by quantum computers. In this paper, we reduce the factorization problem to the search of points of parametrizable varieties, in particular curves, over finite fields. The varieties are required to have an arbitrarily large number of intersection points with some hypersurface over the base field. For a subexponential or polynomial factoring complexity, the number of parameters has to scale sublinearly in the space dimension n, and the complexity of computing a point given the parameters has to be subexponential or polynomial, respectively. We outline a procedure for building these varieties, which is illustrated with two constructions. In one case, we show that there are varieties whose points can be evaluated efficiently given a number of parameters not greater than n/2. In the other case, the bound is dropped to n/3. Incidentally, the first construction resembles a kind of retro-causal model. Retro-causality is considered one possible explanation of quantum weirdness.
Alberto Montina, Stefan Wolf. Fri, 23 Sep 2022.

Rethinking the notion of oracle
https://scholar.archive.org/work/426m2pxnjrasbawctyc2hxvh4u
We present three different perspectives on oracles. First, an oracle is a black box; second, an oracle is an endofunctor on the category of represented spaces; and third, an oracle is an operation on the object of truth-values. These three perspectives link three fields: computability theory, synthetic descriptive set theory, and effective topos theory.
Takayuki Kihara. Mon, 19 Sep 2022.

A case for DOT: Theoretical Foundations for Objects With Pattern Matching and GADT-style Reasoning
https://scholar.archive.org/work/ll2tdshywzdkdkizvfbpyxqmey
Many programming languages in the OO tradition now support pattern matching in some form. Historical examples include Scala and Ceylon, with the more recent additions of Java, Kotlin, TypeScript, and Flow. But pattern matching on generic class hierarchies currently results in puzzling type errors in most of these languages. Yet this combination of features occurs naturally in many scenarios, such as when manipulating typed ASTs. To support it properly, compilers need to implement a form of subtyping reconstruction: the ability to reconstruct subtyping information uncovered at runtime during pattern matching. We introduce cDOT, a new calculus in the family of Dependent Object Types (DOT) intended to serve as a formal foundation for subtyping reconstruction. Being descended from pDOT, itself a formal foundation for Scala, cDOT can be used to encode advanced object-oriented features such as generic inheritance, type constructor variance, F-bounded polymorphism, and first-class recursive modules. We demonstrate that subtyping reconstruction subsumes GADTs by encoding λ_2,Gμ, a classical constraint-based GADT calculus, into cDOT.
Aleksander Boruch-Gruszecki, Radosław Waśko, Yichen Xu, Lionel Parreaux. Thu, 15 Sep 2022.

Foundations of probability-raising causality in Markov decision processes
https://scholar.archive.org/work/r26527terngrxjjzp2yxljwkoy
This work introduces a novel cause-effect relation in Markov decision processes using the probability-raising principle. Initially, sets of states as causes and effects are considered, which is subsequently extended to regular path properties as effects and then as causes. The paper lays the mathematical foundations and analyzes the algorithmic properties of these cause-effect relations. This includes algorithms for checking cause conditions given an effect and deciding the existence of probability-raising causes. As the definition allows for sub-optimal coverage properties, quality measures for causes inspired by concepts of statistical analysis are studied. These include recall, coverage ratio and f-score. The computational complexity for finding optimal causes with respect to these measures is analyzed.
Christel Baier, Jakob Piribauer, Robin Ziemek. Wed, 07 Sep 2022.

Algorithms for the Structural Analysis of Multimode Modelica Models
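To illustrate the probability-raising principle from the entry above on the simplest possible case, here is a Python sketch on a toy acyclic Markov chain (states and probabilities invented for illustration; the paper works with MDPs and richer path properties). A candidate cause c raises the probability of the effect e when reaching e is more likely after visiting c than it is unconditionally:

```python
# Toy acyclic Markov chain: from s0 we either pass through the candidate
# cause state c or bypass it via n, then reach effect e or a safe sink.
trans = {
    "s0": {"c": 0.5, "n": 0.5},
    "c":  {"e": 0.8, "safe": 0.2},
    "n":  {"e": 0.1, "safe": 0.9},
}

def reach_prob(state, target):
    """Probability of eventually reaching `target` from `state`."""
    if state == target:
        return 1.0
    succ = trans.get(state)
    if succ is None:  # absorbing sink
        return 0.0
    return sum(p * reach_prob(nxt, target) for nxt, p in succ.items())

p_effect = reach_prob("s0", "e")             # 0.5*0.8 + 0.5*0.1 = 0.45
p_effect_given_cause = reach_prob("c", "e")  # 0.8, by the Markov property
raising = p_effect_given_cause > p_effect    # c is a probability-raising cause
```

Quality measures such as recall would additionally ask how much of the effect's probability mass is covered by runs through c.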
https://scholar.archive.org/work/h5c2pkzuhne2xkjg3ff2qj2fvi
Since its 3.3 release, Modelica offers the possibility to specify models of dynamical systems with multiple modes having different DAE-based dynamics. However, the handling of such models by the current Modelica tools is not satisfactory, with mathematically sound models yielding exceptions at runtime. In this article, we propose several contributions to this multifaceted issue, namely: an efficient and scalable multimode extension of the structural analysis of Modelica models; a systematic way of rewriting a multimode Modelica model, based on this analysis, so that the rewritten model is guaranteed to be correctly compiled by state-of-the-art Modelica tools; a proposal for the handling of the consistent initialization of multimode models; multimode structural analysis algorithms that handle both multiple modes and mode change events in a unified framework, coupled with a compile-time algorithm for identifying and quantifying impulsive behaviors at mode changes. Our approach is illustrated on relevant example models, and the performance of our implementations is assessed on a variable dimension large-scale model.
Albert Benveniste, Benoît Caillaud, Mathias Malandain, Joan Thibault. Thu, 01 Sep 2022.

Kochen-Specker Contextuality
https://scholar.archive.org/work/vub2wqx3bnfvhfcecbkbdcbkku
A central result in the foundations of quantum mechanics is the Kochen-Specker theorem. In short, it states that quantum mechanics is in conflict with classical models in which the result of a measurement does not depend on which other compatible measurements are jointly performed. Here, compatible measurements are those that can be implemented simultaneously, or more generally, those that are jointly measurable. This conflict is generically called quantum contextuality. In this article, we present an introduction to this subject and its current status. We review several proofs of the Kochen-Specker theorem and different notions of contextuality. We explain how to experimentally test some of these notions and discuss connections between contextuality and nonlocality or graph theory. Finally, we review some applications of contextuality in quantum information processing.
Costantino Budroni, Adán Cabello, Otfried Gühne, Matthias Kleinmann, Jan-Åke Larsson. Wed, 31 Aug 2022.

Generative Datalog with Continuous Distributions
https://scholar.archive.org/work/n7vwg4klwbbznfz4dylfvkbq7e
Arguing for the need to combine declarative and probabilistic programming, Bárány et al. (TODS 2017) recently introduced a probabilistic extension of Datalog as a "purely declarative probabilistic programming language." We revisit this language and propose a more principled approach towards defining its semantics based on stochastic kernels and Markov processes, standard notions from probability theory. This allows us to extend the semantics to continuous probability distributions, thereby settling an open problem posed by Bárány et al. We show that our semantics is fairly robust, allowing both parallel execution and arbitrary chase orders when evaluating a program. We cast our semantics in the framework of infinite probabilistic databases (Grohe and Lindner, LMCS 2022), and show that the semantics remains meaningful even when the input of a probabilistic Datalog program is an arbitrary probabilistic database.
Martin Grohe, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Peter Lindner. Tue, 30 Aug 2022.

On Feller continuity and full abstraction
https://scholar.archive.org/work/3mxaqxc32zdhxoo4vapqzpe3da
We study the nature of applicative bisimilarity in λ-calculi endowed with operators for sampling from continuous distributions. On the one hand, we show that bisimilarity, logical equivalence, and testing equivalence all coincide with contextual equivalence when real numbers can be manipulated through continuous functions only. The key ingredient towards this result is a notion of Feller-continuity for labelled Markov processes, which we believe to be of independent interest, giving rise to a broad class of LMPs for which coinductive and logically inspired equivalences coincide. On the other hand, we show that if no constraint is put on the way real numbers are manipulated, characterizing contextual equivalence turns out to be hard, and most of the aforementioned notions of equivalence are even unsound.
Gilles Barthe, Raphaëlle Crubillé, Ugo Dal Lago, Francesco Gavazzo. Mon, 29 Aug 2022.

Compositional Active Inference II: Polynomial Dynamics. Approximate Inference Doctrines
https://scholar.archive.org/work/vivw4k5etvhrpeo75334fxiswm
We develop the compositional theory of active inference by introducing activity, functorially relating statistical games to the dynamical systems which play them, using the new notion of approximate inference doctrine. In order to exhibit such functors, we first develop the necessary theory of dynamical systems, using a generalization of the language of polynomial functors to supply compositional interfaces of the required types: with the resulting polynomially indexed categories of coalgebras, we construct monoidal bicategories of differential and dynamical "hierarchical inference systems", in which approximate inference doctrines have semantics. We then describe "externally parameterized" statistical games, and use them to construct two approximate inference doctrines found in the computational neuroscience literature, which we call the 'Laplace' and the 'Hebb-Laplace' doctrines: the former produces dynamical systems which optimize the posteriors of Gaussian models; and the latter produces systems which additionally optimize the parameters (or 'weights') which determine their predictions.
Toby St. Clere Smithe. Thu, 25 Aug 2022.

Resource Bisimilarity in Petri Nets is Decidable
https://scholar.archive.org/work/3qts67w46rcfpmblppdn7ym2ta
Petri nets are a popular formalism for modeling and analyzing distributed systems. Tokens in Petri net models can represent the control flow state or resources produced/consumed by transition firings. We define a resource as a part (a submultiset) of Petri net markings and call two resources equivalent when replacing one of them with another in any marking does not change the observable Petri net behavior. We consider resource similarity and resource bisimilarity, two congruent restrictions of bisimulation equivalence on Petri net markings. Previously it was proved that resource similarity (the largest congruence included in bisimulation equivalence) is undecidable. Here we present an algorithm for checking resource bisimilarity, thereby proving that this relation (the largest congruence included in bisimulation equivalence that is a bisimulation) is decidable. We also give an example of two resources in a Petri net that are similar but not bisimilar.
Irina Lomazova, Vladimir Bashkin, Petr Jančar. Mon, 22 Aug 2022.

A coherent differential PCF
https://scholar.archive.org/work/vlgoeprapba73cr4nsk37jpm44
The categorical models of the differential lambda-calculus are additive categories because of the Leibniz rule which requires the summation of two expressions. This means that, as far as the differential lambda-calculus and differential linear logic are concerned, these models feature finite non-determinism and indeed these languages are essentially non-deterministic. In a previous paper we introduced a categorical framework for differentiation which does not require additivity and is compatible with deterministic models such as coherence spaces and probabilistic models such as probabilistic coherence spaces. Based on this semantics we develop a syntax of a deterministic version of the differential lambda-calculus. One nice feature of this new approach to differentiation is that it is compatible with general fixpoints of terms, so our language is actually a differential extension of PCF for which we provide a fully deterministic operational semantics.
Thomas Ehrhard. Thu, 18 Aug 2022.

Living Without Beth and Craig: Definitions and Interpolants in Description and Modal Logics with Nominals and Role Inclusions
https://scholar.archive.org/work/k3yhytv2oravdp3bmfiyi5trne
The Craig interpolation property (CIP) states that an interpolant for an implication exists iff it is valid. The projective Beth definability property (PBDP) states that an explicit definition exists iff a formula stating implicit definability is valid. Thus, the CIP and PBDP reduce potentially hard existence problems to entailment in the underlying logic. Description (and modal) logics with nominals and/or role inclusions enjoy neither the CIP nor the PBDP, but interpolants and explicit definitions have many applications, in particular in concept learning, ontology engineering, and ontology-based data management. In this article we show that, even without Beth and Craig, the existence of interpolants and explicit definitions is decidable in description logics with nominals and/or role inclusions such as ALCO, ALCH and ALCHOI and corresponding hybrid modal logics. However, living without Beth and Craig makes this problem harder than entailment: the existence problems become 2ExpTime-complete in the presence of an ontology or the universal modality, and coNExpTime-complete otherwise. We also analyze explicit definition existence if all symbols (except the one that is defined) are admitted in the definition. In this case the complexity depends on whether one considers individual or concept names. Finally, we consider the problem of computing interpolants and explicit definitions if they exist and turn the complexity upper bound proof into an algorithm computing them, at least for description logics with role inclusions.
Alessandro Artale, Jean Christoph Jung, Andrea Mazzullo, Ana Ozaki, Frank Wolter. Wed, 17 Aug 2022.

Quasilinear-time Computation of Generic Modal Witnesses for Behavioural Inequivalence
https://scholar.archive.org/work/ye5b3kh5yrc7tdzazu5w4dzr7m
We provide a generic algorithm for constructing formulae that distinguish behaviourally inequivalent states in systems of various transition types such as nondeterministic, probabilistic or weighted; genericity over the transition type is achieved by working with coalgebras for a set functor in the paradigm of universal coalgebra. For every behavioural equivalence class in a given system, we construct a formula which holds precisely at the states in that class. The algorithm instantiates to deterministic finite automata, transition systems, labelled Markov chains, and systems of many other types. The ambient logic is a modal logic featuring modalities that are generically extracted from the functor; these modalities can be systematically translated into custom sets of modalities in a postprocessing step. The new algorithm builds on an existing coalgebraic partition refinement algorithm. It runs in time 𝒪((m+n) log n) on systems with n states and m transitions, and the same asymptotic bound applies to the dag size of the formulae it constructs. This improves the bounds on run time and formula size compared to previous algorithms even for previously known specific instances, viz. transition systems and Markov chains; in particular, the best previous bound for transition systems was 𝒪(m n).
Thorsten Wißmann, Stefan Milius, Lutz Schröder. Mon, 15 Aug 2022.

A thread-safe Term Library
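The algorithm in the entry above builds on coalgebraic partition refinement. As a non-generic, quadratic-time stand-in (a toy labelled transition system of my own, not the paper's O((m+n) log n) algorithm), the classical refinement loop looks like this in Python:

```python
def refine(states, trans, labels):
    """Naive bisimulation partition refinement: split blocks by the
    successor-block signature until no block can be split further."""
    part = [set(states)]
    while True:
        def block_of(x):
            return next(i for i, b in enumerate(part) if x in b)
        new_part = []
        for block in part:
            sig = {}
            for st in block:
                key = tuple(
                    frozenset(block_of(t) for t in trans.get((st, a), set()))
                    for a in labels
                )
                sig.setdefault(key, set()).add(st)
            new_part.extend(sig.values())
        if len(new_part) == len(part):  # stable: no block was split
            return new_part
        part = new_part

states = ["p", "q", "r", "s"]
labels = ["a", "b"]
# p --a--> r, q --a--> s, r --b--> r; s is deadlocked
trans = {("p", "a"): {"r"}, ("q", "a"): {"s"}, ("r", "b"): {"r"}}
partition = refine(states, trans, labels)
```

Here p and q land in different blocks; a distinguishing modal formula would be <a><b>true, which p satisfies and q does not.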
https://scholar.archive.org/work/s7q3ntgisfdz5lehkyb77qsjci
Terms are one of the fundamental mathematical concepts in computing. For example, every expression characterisable by a context-free grammar is a term. We developed a thread-safe Term Library. The biggest challenge is to implement hyper-efficient multi-reader/single-writer mutual exclusion, for which we designed the new busy-forbidden protocol. Model checking is used to show both the correctness of the protocol and the Term Library. Benchmarks show this Term Library has little overhead compared to sequential versions and outperforms them already on two processors. Using the new library in an existing state space generation tool yields very substantial speed-ups.
J.F. Groote, M. Laveaux, P.H.M. van Spaendonck. Fri, 12 Aug 2022.

SPARKs: Succinct Parallelizable Arguments of Knowledge
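The core trick of term libraries like the one above is maximal sharing (hash-consing): each structurally distinct term exists exactly once, so equality is pointer equality. A minimal Python sketch of that idea, using an ordinary lock; the paper's busy-forbidden protocol is a far more efficient multi-reader/single-writer scheme, and this code is only an illustrative stand-in:

```python
import threading

_table = {}                 # (head, args) -> unique Term
_lock = threading.Lock()    # serializes construction; safe from any thread

class Term:
    __slots__ = ("head", "args")
    def __init__(self, head, args):
        self.head, self.args = head, args
    def __repr__(self):
        if not self.args:
            return self.head
        return f"{self.head}({', '.join(map(repr, self.args))})"

def make(head, *args):
    """Return the unique shared term with this head and these arguments."""
    key = (head, args)
    with _lock:
        t = _table.get(key)
        if t is None:
            t = _table[key] = Term(head, args)
        return t

# Maximal sharing: building f(x, x) twice yields the very same object.
a = make("f", make("x"), make("x"))
b = make("f", make("x"), make("x"))
```

With sharing in place, structural equality tests become O(1) identity checks, which is what makes the thread-safety of construction the critical bottleneck.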
https://scholar.archive.org/work/zlg5vawcnvhwfpejbeg7girz5q
We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument of knowledge with the following three efficiency properties for computing and proving a (non-deterministic, polynomial time) parallel RAM computation that can be computed in parallel time T with at most p processors:
- The prover's (parallel) running time is T + polylog(T·p). (In other words, the prover's running time is essentially T for large computation times!)
- The prover uses at most p · polylog(T·p) processors.
- The communication and verifier complexity are both polylog(T·p).
The combination of all three is desirable as it gives a way to leverage a moderate increase in parallelism in favor of near-optimal running time. We emphasize that even a factor two overhead in the prover's parallel running time is not allowed. Our main contribution is a generic construction of SPARKs from any succinct argument of knowledge where the prover's parallel running time is T · polylog(T·p) when using p processors, assuming collision-resistant hash functions. When suitably instantiating our construction, we achieve a four-round SPARK for any parallel RAM computation assuming only collision resistance. Additionally assuming the existence of a succinct non-interactive argument of knowledge (SNARK), we construct a non-interactive SPARK that also preserves the space complexity of the underlying computation up to polylog(T·p) factors. We also show the following applications of non-interactive SPARKs. First, they immediately imply delegation protocols with near-optimal prover (parallel) running time. This, in turn, gives a way to construct verifiable delay functions (VDFs) from any sequential function. When the sequential function is also memory-hard, this yields the first construction of a memory-hard VDF.
Naomi Ephraim, Cody Freitag, Ilan Komargodski, Rafael Pass. Wed, 10 Aug 2022.

Moss' logic for ordered coalgebras
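The VDF application in the SPARKs entry above rests on having an inherently sequential function; iterated hashing is the standard toy candidate. The sketch below is my own illustration of just the sequential evaluation; the paper's contribution is attaching a SPARK proof so the result can be verified far faster than it was computed:

```python
import hashlib

def iterated_hash(seed: bytes, t: int) -> bytes:
    """Apply SHA-256 t times in a chain. Each step depends on the
    previous output, so the t steps cannot be parallelized."""
    h = seed
    for _ in range(t):
        h = hashlib.sha256(h).digest()
    return h

out = iterated_hash(b"seed", 1000)
```

Verifying `out` naively requires redoing the whole chain; a non-interactive SPARK over this computation would make verification polylogarithmic in t.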
https://scholar.archive.org/work/424byatauvagzo7pjd3zjkv2pi
We present a finitary version of Moss' coalgebraic logic for T-coalgebras, where T is a locally monotone endofunctor of the category of posets and monotone maps. The logic uses a single cover modality whose arity is given by the least finitary subfunctor of the dual of the coalgebra functor T_ω^∂, and the semantics of the modality is given by relation lifting. For the semantics to work, T is required to preserve exact squares. For the finitary setting to work, T_ω^∂ is required to preserve finite intersections. We develop a notion of a base for subobjects of T_ω X. This in particular allows us to talk about the finite poset of subformulas for a given formula. The notion of a base is introduced generally for a category equipped with a suitable factorisation system. We prove that the resulting logic has the Hennessy-Milner property for the notion of similarity based on the notion of relation lifting. We define a sequent proof system for the logic, and prove its completeness.
Marta Bílková, Matěj Dostál. Sat, 06 Aug 2022.

LIPIcs, Volume 238, DNA 28, Complete Volume
https://scholar.archive.org/work/627o3xn4vbbgpdox5dwolgcuny
LIPIcs, Volume 238, DNA 28, Complete Volume.
Thomas E. Ouldridge, Shelley F. J. Wickham. Thu, 04 Aug 2022.

The Zoo of Lambda-Calculus Reduction Strategies, And Coq
https://scholar.archive.org/work/un3ebtmdjndf3ftbrrsf5g7b5e
We present a generic framework for specifying and reasoning about reduction strategies in the lambda calculus, representable as sets of term decompositions. It is provided as a Coq formalization that features a novel format of phased strategies. It facilitates concise description and algebraic reasoning about properties of reduction strategies. The formalization accommodates many well-known strategies, both weak and strong, such as call by name, call by value, head reduction, normal order, full β-reduction, etc. We illustrate the use of the framework as a tool to inspect and categorize the "zoo" of existing strategies, as well as to discover and study new ones with particular properties.
Małgorzata Biernacka, Witold Charatonik, Tomasz Drab, June Andronick, Leonardo de Moura. Wed, 03 Aug 2022.

Graded Monads and Behavioural Equivalence Games
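As a small, self-contained illustration of why the choice of strategy in the entry above matters (my own toy interpreter, not the Coq framework): call by name discards a diverging argument and reaches a normal form, while call by value loops on it until a step budget runs out.

```python
# A tiny lambda-calculus interpreter contrasting two strategies.
# Terms: ("var", x) | ("lam", x, body) | ("app", f, a).
# No capture-avoiding renaming: fine for the closed toy terms below.
def subst(t, x, v):
    tag = t[0]
    if tag == "var":
        return v if t[1] == x else t
    if tag == "lam":
        return t if t[1] == x else ("lam", t[1], subst(t[2], x, v))
    return ("app", subst(t[1], x, v), subst(t[2], x, v))

def eval_(t, fuel, by_value):
    """Evaluate with a step budget; returns None if fuel runs out."""
    if fuel == 0:
        return None
    if t[0] == "app":
        f = eval_(t[1], fuel - 1, by_value)
        if f is None or f[0] != "lam":
            return None
        arg = t[2]
        if by_value:  # call by value evaluates the argument first
            arg = eval_(arg, fuel - 1, by_value)
            if arg is None:
                return None
        return eval_(subst(f[2], f[1], arg), fuel - 1, by_value)
    return t  # variables and lambdas are values

# omega diverges; (\x.\y.y) omega discards it without looking at it
zz = ("lam", "z", ("app", ("var", "z"), ("var", "z")))
omega = ("app", zz, zz)
term = ("app", ("lam", "x", ("lam", "y", ("var", "y"))), omega)

cbn = eval_(term, 100, by_value=False)  # reaches the normal form \y.y
cbv = eval_(term, 100, by_value=True)   # None: fuel exhausted on omega
```

The framework in the entry above captures such strategies uniformly as sets of term decompositions rather than as separate interpreters.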
https://scholar.archive.org/work/5vvsxffo3nelfjcksilzdw2xha
The framework of graded semantics uses graded monads to capture behavioural equivalences of varying granularity, for example as found in the linear-time / branching-time spectrum, over general system types. We describe a generic Spoiler-Duplicator game for graded semantics that is extracted from the given graded monad, and may be seen as playing out an equational proof; instances include standard pebble games for simulation and bisimulation as well as games for trace-like equivalences and coalgebraic behavioural equivalence. Considerations on an infinite variant of such games lead to a novel notion of infinite-depth graded semantics. Under reasonable restrictions, the infinite-depth graded semantics associated to a given graded equivalence can be characterized in terms of a determinization construction for coalgebras under the equivalence at hand.
Chase Ford, Stefan Milius, Lutz Schröder, Harsh Beohar, Barbara König. Tue, 02 Aug 2022.