IA Scholar Query: Local Search Techniques for Disjunctive Logic Programs.
https://scholar.archive.org/
Internet Archive Scholar query results feed. info@archive.org. Thu, 29 Sep 2022 00:00:00 GMT. Generated by fatcat-scholar. https://scholar.archive.org/help

An integrated geological-geophysical approach to subsurface interface reconstruction of muon tomography measurements in high alpine regions
https://scholar.archive.org/work/ricux5erhrhe7dhkcevl5l3iuy
Muon tomography is an imaging technique that has emerged over the last decades. The principal concept is similar to X-ray tomography, in which the spatial distribution of material densities is determined by means of penetrating photons; it differs from that well-known technology only in the type of particle. Muons are continuously produced in the Earth's atmosphere when primary cosmic rays (mostly protons) interact with the atmosphere's molecules. Depending on their energies, these muons can penetrate materials up to several hundreds of metres (or even kilometres). Consequently, they have been used for imaging larger objects, including large geological objects such as volcanoes, caves and fault systems. This research project aimed at applying this technology to an alpine glacier in Central Switzerland to determine its bedrock geometry and, if possible, to gain information on the bedrock erosion mechanism. To this end, two major experimental studies were conducted with the aim of reconstructing the bedrock geometries of two different glaciers. Given this framework, I present in this thesis my contribution to the project, in which I worked for 5 years. Most of the technological know-how of muon tomography still lies within the physics institutes that were the key drivers in the development of this method. As the geophysical/geological community is nowadays an important user of this technology, it is important that non-physicists, too, familiarise themselves with the theory and concepts behind muon tomography; this can be an effective way to bring more geoscientists to utilise this new technology for their own research. The first part of this thesis tackles this problem with a review article on the principles of muon tomography and a guide to best practice. A second important aspect is the reconstruction of the bedrock topography given muon flux measurements at various locations.
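As a rough numerical illustration of the attenuation principle behind this abstract (not taken from the thesis): muon flux through matter falls off with opacity, the density integral along the ray, so measured counts can be inverted for overburden thickness. The power-law transmission curve and its parameters below are hypothetical stand-ins for a real flux model.

```python
# Toy sketch, NOT the thesis's reconstruction method. We assume a hypothetical
# power-law transmission T(X) = (1 + X/X0)^-gamma of opacity X = rho * L.

def opacity(density_g_cm3: float, path_length_m: float) -> float:
    """Opacity X = rho * L in g/cm^2 for a uniform material (1 m = 100 cm)."""
    return density_g_cm3 * path_length_m * 100.0

def transmitted_fraction(x_g_cm2: float, x0: float = 1.0e4, gamma: float = 2.0) -> float:
    """Hypothetical transmission model: monotone decreasing in opacity."""
    return (1.0 + x_g_cm2 / x0) ** -gamma

def infer_thickness(measured_fraction: float, density_g_cm3: float,
                    lo: float = 0.0, hi: float = 5000.0) -> float:
    """Invert T(X(L)) for the path length L by bisection."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if transmitted_fraction(opacity(density_g_cm3, mid)) > measured_fraction:
            lo = mid  # still too transparent: true thickness is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

rock, ice = 2.65, 0.92  # typical densities in g/cm^3
t_rock = transmitted_fraction(opacity(rock, 100.0))
t_ice = transmitted_fraction(opacity(ice, 100.0))
assert t_rock < t_ice  # 100 m of rock absorbs more muons than 100 m of ice
L = infer_thickness(t_rock, rock)
assert abs(L - 100.0) < 1.0  # the inversion recovers the simulated thickness
```

The density contrast between ice and bedrock is exactly what makes glacier-bed reconstruction from muon counts possible in principle.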
Many reconstruction algorithms to date include supplementary geological information such as density and/or compositional me [...]

Alessandro Diego Lechmann (Thu, 29 Sep 2022 00:00:00 GMT)

High-resolution analysis of individual Drosophila melanogaster larvae within groups uncovers inter- and intra-individual variability in locomotion and its neurogenetic modulation
https://scholar.archive.org/work/3n3he6kak5elllxa6dsbju4imm
Neuronally orchestrated muscular movement and locomotion are defining faculties of multicellular animals. Due to its numerically simple brain and neuromuscular system and its genetic accessibility, the larva of the fruit fly Drosophila melanogaster is an established model to study these processes at tractable levels of complexity. However, although the faculty of locomotion clearly pertains to the individual animal, present studies of locomotion in larval Drosophila mostly use group assays and measurements aggregated across individual animals. The alternative is to measure animals one at a time, an extravagance for larger-scale analyses. In principle or in practice, this rules out grasping the inter- and intra-individual variability in locomotion and its genetic and neuronal determinants. Here we present the IMBA (Individual Maggot Behaviour Analyser) for tracking and analysing the behaviour of individual larvae within groups. Using a combination of computational modelling and statistical approaches, the IMBA reliably resolves individual identity across collisions. It does not require specific hardware and can therefore be used in non-expert labs. We take advantage of the IMBA first to systematically describe the inter- and intra-individual variability in free, unconstrained locomotion in wild-type animals. We then report the discovery of a novel, complex locomotion phenotype of a mutant lacking an adhesion-type GPCR. The IMBA further allows us to determine, at the level of individual animals, the modulation of locomotion across repeated activations of dopamine neurons. Strikingly, the IMBA can also be used to analyse 'silly walks', that is, patterns of locomotion it was not originally designed to investigate. This is shown for the transient backward locomotion induced by brief optogenetic activation of the brain-descending 'mooncrawler' neurons, and the variability in this behaviour.
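To make the identity-resolution task concrete: the core assignment step of any multi-animal tracker can be sketched as matching detections in the current frame to known individuals from the previous frame. The greedy nearest-neighbour matcher below is a minimal baseline sketch, not the IMBA algorithm itself (which combines modelling and statistics to stay reliable through collisions); all names in it are illustrative.

```python
# Minimal baseline sketch of frame-to-frame identity assignment.
# NOT the IMBA method: just the generic matching step it improves upon.
from math import dist

def match_identities(prev, curr):
    """prev: {larva_id: (x, y)} from the last frame; curr: list of (x, y)
    detections. Returns {larva_id: (x, y)} with identities carried over by
    greedily taking the globally closest unmatched pairs first."""
    pairs = sorted(
        ((dist(p, c), lid, j)
         for lid, p in prev.items()
         for j, c in enumerate(curr)),
        key=lambda t: t[0],
    )
    assigned, used, out = set(), set(), {}
    for _, lid, j in pairs:
        if lid in assigned or j in used:
            continue
        out[lid] = curr[j]
        assigned.add(lid)
        used.add(j)
    return out

prev = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
curr = [(9.5, 0.5), (0.5, -0.5)]  # detections arrive in arbitrary order
tracked = match_identities(prev, curr)
assert tracked == {"A": (0.5, -0.5), "B": (9.5, 0.5)}
```

During a collision two larvae yield a single merged detection, which is precisely where a purely geometric matcher like this one fails and model-based disambiguation becomes necessary.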
Thus, the IMBA is an easy-to-use toolbox allowing an unprecedentedly rich view of the behaviour and behavioural variability of individual Drosophila larvae, with utility in multiple biomedical research contexts.

Michael Thane, Emmanouil Paisios, Torsten Stöter, Anna-Rosa Krüger, Sebastian Gläß, Anne-Kristin Dahse, Nicole Scholz, Bertram Gerber, Dirk J Lehmann, Michael Schleyer (Wed, 28 Sep 2022 00:00:00 GMT)

Mathematical Components
https://scholar.archive.org/work/ahuebtxoqbcrbebz5rb2ulla4q
Mathematical Components is the name of a library of formalized mathematics for the Coq system. It covers a variety of topics, from the theory of basic data structures (e.g., numbers, lists, finite sets) to advanced results in various flavors of algebra. This library constitutes the infrastructure for the machine-checked proofs of the Four Color Theorem and of the Odd Order Theorem. This book exists to break down the barriers to entry. While there are several books covering the usage of the Coq system and the theory it is based on, the Mathematical Components library is built in an unconventional way. As a consequence, this book provides a non-standard presentation of Coq, putting upfront the formalization choices and the proof style that are the pillars of the library. The book targets two audiences. On the one hand, newcomers, even the more mathematically inclined ones, find a soft introduction to the programming language of Coq, Gallina, and the SSReflect proof language. On the other hand, accustomed Coq users find a substantial account of the formalization style that made the Mathematical Components library possible.

Assia Mahboubi, Enrico Tassi (Wed, 28 Sep 2022 00:00:00 GMT)

Embedding Hindsight Reasoning in Separation Logic
https://scholar.archive.org/work/7llk2wfmkfekzegg5iqmfsfp2i
Proving linearizability of concurrent data structures remains a key challenge for verification. We present temporal interpolation as a new proof principle to conduct such proofs using hindsight arguments within concurrent separation logic. Temporal reasoning offers an easy-to-use alternative to prophecy variables and has the advantage of structuring proofs into easy-to-discharge hypotheses. To hindsight theory, our work brings the formal rigor and proof machinery of concurrent program logics. We substantiate the usefulness of our development by verifying the linearizability of the Logical Ordering (LO-)tree and RDCSS. Both of these involve complex proof arguments due to future-dependent linearization points. The LO-tree additionally features complex structure overlays. Our proof of the LO-tree is the first formal proof of this data structure. Interestingly, our formalization revealed a previously unknown bug and showed an existing informal proof to be erroneous.

Roland Meyer, Thomas Wies, Sebastian Wolff (Tue, 27 Sep 2022 00:00:00 GMT)

DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning
https://scholar.archive.org/work/36zvopzuxrbvrcnqqioceqqc34
We propose a novel general method that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them for non-trivial action planning. Our robot interacts with objects using an initial action repertoire that is assumed to be acquired earlier and observes the effects it can create in the environment. To form action-grounded object, effect, and relational categories, we employ a binary bottleneck layer in a predictive, deep encoder-decoder network that takes the image of the scene and the action applied as input, and generates the resulting effects in the scene in pixel coordinates. After learning, the binary latent vector represents action-driven object categories based on the interaction experience of the robot. To distill the knowledge represented by the neural network into rules useful for symbolic reasoning, a decision tree is trained to reproduce its decoder function. Probabilistic rules are extracted from the decision paths of the tree and are represented in the Probabilistic Planning Domain Definition Language (PPDDL), allowing off-the-shelf planners to operate on the knowledge extracted from the sensorimotor experience of the robot. The deployment of the proposed approach for a simulated robotic manipulator enabled the discovery of discrete representations of object properties such as 'rollable' and 'insertable'. In turn, the use of these representations as symbols allowed the generation of effective plans for achieving goals, such as building towers of the desired height, demonstrating the effectiveness of the approach for multi-step object manipulation. Finally, we demonstrate that the system is not restricted to the robotics domain by assessing its applicability to the MNIST 8-puzzle domain, in which learned symbols allow for the generation of plans that move the empty tile into any given position.

Alper Ahmetoglu, M. Yunus Seker, Justus Piater, Erhan Oztop, Emre Ugur (Tue, 27 Sep 2022 00:00:00 GMT)

Full-Program Induction: Verifying Array Programs sans Loop Invariants
https://scholar.archive.org/work/i7taujl565bg5jj4qyvata7zx4
Arrays are commonly used in a variety of software to store and process data in loops. Automatically proving safety properties of such programs that manipulate arrays is challenging. We present a novel verification technique, called full-program induction, for proving (a sub-class of) quantified as well as quantifier-free properties of programs manipulating arrays of parametric size N. Instead of inducting over individual loops, our technique inducts over the entire program (possibly containing multiple loops) directly via the program parameter N. The technique performs non-trivial transformations of the given program and pre-conditions during the inductive step. The transformations assist in effectively reducing the assertion checking problem by transforming a program with multiple loops to a program which has fewer and simpler loops or is loop-free. Significantly, full-program induction does not require generation or use of loop-specific invariants. To assess the efficacy of our technique, we have developed a prototype tool called Vajra. We demonstrate the performance of Vajra vis-a-vis several state-of-the-art tools on a large set of array manipulating benchmarks from the international software verification competition (SV-COMP) and on several programs inspired by algebraic functions that perform polynomial computations.

Supratik Chakraborty, Ashutosh Gupta, Divyesh Unadkat (Mon, 26 Sep 2022 00:00:00 GMT)

A First-Order Logic with Frames
https://scholar.archive.org/work/uodw72rxybgelnlvndtfwszvvy
We propose a novel logic, called Frame Logic (FL), that extends first-order logic (with recursive definitions) using a construct Sp(.) that captures the implicit supports of formulas -- the precise subset of the universe upon which their meaning depends. Using such supports, we formulate proof rules that facilitate frame reasoning elegantly when the underlying model undergoes change. We show that the logic is expressive by capturing several data-structures and also exhibit a translation from a precise fragment of separation logic to frame logic. Finally, we design a program logic based on frame logic for reasoning about programs that dynamically update heaps, which facilitates local specifications and frame reasoning. This program logic consists of both localized proof rules and rules that derive the weakest tightest preconditions in FL.

Adithya Murali, Lucas Peña, Christof Löding, P. Madhusudan (Mon, 26 Sep 2022 00:00:00 GMT)

Optimal Job Scheduling and Bandwidth Augmentation in Hybrid Data Center Networks
https://scholar.archive.org/work/ewa5pyah7ffsrhs47n6tgpx4su
Optimizing data transfers is critical for improving job performance in data-parallel frameworks. In a hybrid data center with both wired and wireless links, reconfigurable wireless links can provide additional bandwidth to speed up job execution. However, this requires the scheduler and transceivers to make joint decisions under coupled constraints. In this work, we identify that the joint job scheduling and bandwidth augmentation problem is a complex mixed-integer nonlinear problem, which is not solvable by existing optimization methods. To address this bottleneck, we transform it into an equivalent problem by coupling its heuristic bounds, revising the data-transfer representation, and decoupling and reformulating its nonlinear constraints, such that the optimal solution can be efficiently acquired by the branch-and-bound method. Based on the proposed method, the performance of job scheduling with and without bandwidth augmentation is studied. Experiments show that the performance gain depends on multiple factors, especially the data size. Compared with existing solutions, our method can reduce the job completion time by up to 10% on average under a production-scenario setting.

Binquan Guo, Zhou Zhang, Ye Yan, Hongyan Li (Fri, 23 Sep 2022 00:00:00 GMT)

Computing solution space properties of combinatorial optimization problems via generic tensor networks
https://scholar.archive.org/work/xee5elvwfjdvlmht37lzrjqc7m
We introduce a unified framework to compute the solution space properties of a broad class of combinatorial optimization problems. These properties include finding one of the optimum solutions, counting the number of solutions of a given size, and enumeration and sampling of solutions of a given size. Using the independent set problem as an example, we show how all these solution space properties can be computed in the unified approach of generic tensor networks. We demonstrate the versatility of this computational tool by applying it to several examples, including computing the entropy constant for hardcore lattice gases, studying the overlap gap properties, and analyzing the performance of quantum and classical algorithms for finding maximum independent sets.

Jin-Guo Liu, Xun Gao, Madelyn Cain, Mikhail D. Lukin, Sheng-Tao Wang (Fri, 23 Sep 2022 00:00:00 GMT)

Combinatorial optimization and reasoning with graph neural networks
https://scholar.archive.org/work/dszclpgdgfgzrnd562tfbceni4
Combinatorial optimization is a well-established area in operations research and computer science. Until recently, its methods have focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational input due to their invariance to permutations and awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aiming at optimization and machine learning researchers.

Quentin Cappart, Didier Chételat, Elias Khalil, Andrea Lodi, Christopher Morris, Petar Veličković (Fri, 23 Sep 2022 00:00:00 GMT)

Optimization with Constraint Learning: A Framework and Survey
https://scholar.archive.org/work/bg52kjeirvd57kmetymsfpkv7i
Many real-life optimization problems contain one or more constraints or objectives for which there are no explicit formulas. If data are available, however, they can be used to learn the constraints. The benefits of this approach are clear, but the process needs to be carried out in a structured manner. This paper therefore provides a framework for Optimization with Constraint Learning (OCL), which we believe will help to formalize and direct the process of learning constraints from data. This framework includes the following steps: (i) setup of the conceptual optimization model, (ii) data gathering and preprocessing, (iii) selection and training of predictive models, (iv) resolution of the optimization model, and (v) verification and improvement of the optimization model. We then review the recent OCL literature in light of this framework and highlight current trends as well as areas for future research.

Adejuyigbe Fajemisin, Donato Maragno, Dick den Hertog (Thu, 22 Sep 2022 00:00:00 GMT)

Constrained Local Search for Last-Mile Routing
https://scholar.archive.org/work/26kzttk425frxideeavmrwjraq
Last-mile routing refers to the final step in a supply chain, delivering packages from a depot station to the homes of customers. At the level of a single van driver, the task is a traveling salesman problem. But the choice of route may be constrained by warehouse sorting operations, van-loading processes, driver preferences, and other considerations, rather than a straightforward minimization of tour length. We propose a simple and efficient penalty-based local-search algorithm for route optimization in the presence of such constraints, adopting a technique developed by Helsgaun to extend the LKH traveling salesman problem code to general vehicle-routing models. We apply his technique to handle combinations of constraints obtained from an analysis of historical routing data, enforcing properties that are desired in high-quality solutions. Our code is available under the open-source MIT license. An earlier version of the code received the $100,000 top prize in the Amazon Last Mile Routing Research Challenge organized in 2021.

William Cook, Stephan Held, Keld Helsgaun (Thu, 22 Sep 2022 00:00:00 GMT)

Neural Lyapunov Control
https://scholar.archive.org/work/wd57oqba75eipmjiebo2g6zrie
We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with a provable guarantee of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when no counterexample is found by the falsifier, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides an end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We show experiments on how the new methods obtain high-quality solutions for challenging control problems.

Ya-Chien Chang, Nima Roohi, Sicun Gao (Thu, 22 Sep 2022 00:00:00 GMT)

Towards Faithful Model Explanation in NLP: A Survey
https://scholar.archive.org/work/7iayjuiybbb6hgq5yxefmv6yju
End-to-end neural NLP architectures are notoriously difficult to understand, which has given rise to numerous efforts towards model explainability in recent years. An essential principle of model explanation is Faithfulness, i.e., an explanation should accurately represent the reasoning process behind the model's prediction. This survey first discusses the definition and evaluation of Faithfulness, as well as its significance for explainability. We then introduce the recent advances in faithful explanation by grouping approaches into five categories: similarity methods, analysis of model-internal structures, backpropagation-based methods, counterfactual intervention, and self-explanatory models. Each category is illustrated with its representative studies, advantages, and shortcomings. Finally, we discuss all the above methods in terms of their common virtues and limitations, and reflect on future work directions towards faithful explainability. For researchers interested in studying interpretability, this survey offers an accessible and comprehensive overview of the area, laying the basis for further exploration. For users hoping to better understand their own models, this survey will be an introductory manual helping with choosing the most suitable explanation method(s).

Qing Lyu, Marianna Apidianaki, Chris Callison-Burch (Thu, 22 Sep 2022 00:00:00 GMT)

Motion Planning Under Uncertainty with Complex Agents and Environments via Hybrid Search
https://scholar.archive.org/work/t3x4qanyezeirfutoiqslvs6h4
As autonomous systems and robots are applied to more real world situations, they must reason about uncertainty when planning actions. Mission success oftentimes cannot be guaranteed and the planner must reason about the probability of failure. Unfortunately, computing a trajectory that satisfies mission goals while constraining the probability of failure is difficult because of the need to reason about complex, multidimensional probability distributions. Recent methods have seen success using chance-constrained, model-based planning. However, the majority of these methods can only handle simple environment and agent models. We argue that there are two main drawbacks of current approaches to goal-directed motion planning under uncertainty. First, current methods suffer from an inability to deal with expressive environment models such as 3D non-convex obstacles. Second, most planners rely on considerable simplifications when computing trajectory risk including approximating the agent's dynamics, geometry, and uncertainty. In this article, we apply hybrid search to the risk-bound, goal-directed planning problem. The hybrid search consists of a region planner and a trajectory planner. The region planner makes discrete choices by reasoning about geometric regions that the autonomous agent should visit in order to accomplish its mission. In formulating the region planner, we propose landmark regions that help produce obstacle-free paths. The region planner passes paths through the environment to a trajectory planner; the task of the trajectory planner is to optimize trajectories that respect the agent's dynamics and the user's desired risk of mission failure. We discuss three approaches to modeling trajectory risk: a CDF-based approach, a sampling-based collocation method, and an algorithm named Shooting Method Monte Carlo. 
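The sampling-based flavour of trajectory risk estimation mentioned above can be illustrated with a deliberately simple stand-in (this is not the paper's Shooting Method Monte Carlo, and the 1D dynamics and parameters are invented for the sketch): roll out a nominal trajectory many times under random disturbances and count how often any state enters an obstacle.

```python
# Hedged sketch of Monte Carlo trajectory-risk estimation (illustrative only).
import random

def rollout(start, controls, noise_std, rng):
    """Integrate toy 1D kinematics x' = x + u + w with w ~ N(0, noise_std)."""
    x, path = start, [start]
    for u in controls:
        x += u + rng.gauss(0.0, noise_std)
        path.append(x)
    return path

def estimate_risk(start, controls, obstacle, noise_std, n_samples, seed=0):
    """Fraction of sampled rollouts whose state enters the interval `obstacle`."""
    rng = random.Random(seed)
    lo, hi = obstacle
    failures = sum(
        any(lo <= x <= hi for x in rollout(start, controls, noise_std, rng))
        for _ in range(n_samples)
    )
    return failures / n_samples

controls = [1.0] * 5  # nominal path visits x = 0, 1, ..., 5
risk_near = estimate_risk(0.0, controls, obstacle=(2.4, 2.6),
                          noise_std=0.3, n_samples=2000)
risk_far = estimate_risk(0.0, controls, obstacle=(8.0, 9.0),
                         noise_std=0.3, n_samples=2000)
assert risk_far < risk_near  # an obstacle far from the nominal path is rarely hit
```

A risk-bound planner would then search for controls whose estimated failure probability stays below the user's chance constraint; the sample count trades accuracy of the estimate against planning time.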
These models allow computation of trajectory risk with more complex environments, agent dynamics, geometries, and models of uncertainty than past approaches. A variety of 2D and 3D test cases are presented, including a linear case, a Dubins car model, and an underwater autonomous vehicle. The method is shown to outperform other methods in terms of speed and utility of the solution. Additionally, the models of trajectory risk are shown to better approximate risk in simulation.

Daniel Strawser, Brian Williams (Mon, 19 Sep 2022 00:00:00 GMT)

S2TD: a Separation Logic Verifier that Supports Reasoning of the Absence and Presence of Bugs
https://scholar.archive.org/work/wb7ptiwhfbbmhcfi4ugmnpksle
Heap-manipulating programs are known to be challenging to reason about. We present a novel verifier for heap-manipulating programs called S2TD, which encodes programs systematically in the form of Constrained Horn Clauses (CHC) using a novel extension of separation logic (SL) with recursive predicates and dangling predicates. S2TD actively explores cyclic proofs to address the path explosion problem. S2TD differentiates itself from existing CHC-based verifiers by focusing on heap-manipulating programs and employing cyclic proofs to efficiently verify or falsify them with counterexamples. Compared with existing SL-based verifiers, S2TD precisely specifies the heaps of de-allocated pointers to avoid false positives in reasoning about the presence of bugs. S2TD has been evaluated using a comprehensive set of benchmark programs from the SV-COMP repository. The results show that S2TD is more effective than state-of-the-art program verifiers and is more efficient than most of them.

Quang Loc Le, Jun Sun, Long H. Pham, Shengchao Qin (Mon, 19 Sep 2022 00:00:00 GMT)

Cyber Threats to Smart Grids: Review, Taxonomy, Potential Solutions, and Future Directions
https://scholar.archive.org/work/r6inw57mrzanjp2hohn746ugxm
Smart Grids (SGs) are governed by advanced computing, control technologies, and networking infrastructure. However, compromised cybersecurity of the smart grid not only affects the security of existing energy systems but also directly impacts national security. The increasing number of cyberattacks against the smart grid urgently necessitates more robust security protection technologies to maintain the security of the grid system and its operations. The purpose of this review paper is to provide a thorough understanding of the influence of incumbent cyberattacks on the entire smart grid ecosystem. In this paper, we review the various threats in the smart grid, which have two core domains: the intrinsic vulnerability of the system and external cyberattacks. Similarly, we analyze the vulnerabilities of all components of the smart grid (hardware, software, and data communication), data management, services and applications, the running environment, and evolving and complex smart grids. A structured smart grid architecture and global smart grid cyberattacks with their impact from 2010 to July 2022 are presented. Then, we investigate the thematic taxonomy of cyberattacks on smart grids to highlight attack strategies, their consequences, and the related studies analyzed. In addition, potential cybersecurity solutions for smart grids are explained in the context of the implementation of blockchain and Artificial Intelligence (AI) techniques. Finally, technical future directions based on the analysis are provided against cyberattacks on SGs.

Jianguo Ding, Attia Qammar, Zhimin Zhang, Ahmad Karim, Huansheng Ning (Sat, 17 Sep 2022 00:00:00 GMT)

Symbolic Execution for Randomized Programs
https://scholar.archive.org/work/3bqd2dwsqrfm7ijqhmzzatn6vu
We propose a symbolic execution method for programs that can draw random samples. In contrast to existing work, our method can verify randomized programs with unknown inputs and can prove probabilistic properties that universally quantify over all possible inputs. Our technique augments standard symbolic execution with a new class of probabilistic symbolic variables, which represent the results of random draws, and computes symbolic expressions representing the probability of taking individual paths. We implement our method on top of the KLEE symbolic execution engine alongside multiple optimizations and use it to prove properties about probabilities and expected values for a range of challenging case studies written in C++, including Freivalds' algorithm, randomized quicksort, and a randomized property-testing algorithm for monotonicity. We evaluate our method against Psi, an exact probabilistic symbolic inference engine, and Storm, a probabilistic model checker, and show that our method significantly outperforms both tools.

Zachary Susag, Sumit Lahiri, Justin Hsu, Subhajit Roy (Fri, 16 Sep 2022 00:00:00 GMT)

A processual perspective on whole-class-scaffolding in business education
https://scholar.archive.org/work/me2hew5zrvf6por3jpfqgubsqu
Context: Scaffolding is a form of process-adaptive learning support that is relevant in numerous contexts, including informal learning, workplace learning, and school teaching. While scaffolding can be well conceptualised for individual learning situations (especially tutoring situations), it is difficult to measure process adaptivity in heterogeneous learning groups such as school classes. Approach: In this paper, we develop a measurement method that targets the deep structure of teaching and learning in whole-class settings. Processes of shared knowledge construction are taken into account, since whole-class-scaffolding (WCS) means to shape and develop common or joint knowledge spaces rather than to scaffold a multitude of individual construction processes at the same time. To achieve a coding procedure for WCS interactions, we integrate scaffolding principles with principles of dialogic teaching and explicate a set of rules that can be correlated to the quality of WCS episodes rated on distinct Likert scales. Results: The measurement method developed in the paper provides a solution to the problem of how to measure process-adaptive learning support that is not only related to individual learners, but is directed at a heterogeneous group of learners in which different support needs may be present simultaneously. The coding procedure systematically links scaffolding principles and principles of dialogic teaching and enables us to capture the dynamics of teaching and learning processes in larger group settings. In this respect, concepts such as joint and common space, representing entities to which WCS refers, are operationalised. Conclusions: When methods for measuring the dynamics of teaching and learning processes are available, research on instructional support is no longer limited to global ratings of whole learning units. Furthermore, the codings allow for a more fine-grained analysis of trajectories of scaffolding interactions.
Such an analysis reveals information about local specifics [...]

Rico Hermkes, Gerhard Minnameier, Manon Heuer-Kinscher (Fri, 16 Sep 2022 00:00:00 GMT)

Proving Hypersafety Compositionally
https://scholar.archive.org/work/mis3e6oznzgmve37akr4bwlv3e
Hypersafety properties of arity n are program properties that relate n traces of a program (or, more generally, traces of n programs). Classic examples include determinism, idempotence, and associativity. A number of relational program logics have been introduced to target this class of properties. Their aim is to construct simpler proofs by capitalizing on structural similarities between the n related programs. We propose an unexplored, complementary proof principle that establishes hyper-triples (i.e. hypersafety judgments) as a unifying compositional building block for proofs, and we use it to develop a Logic for Hyper-triple Composition (LHC), which supports forms of proof compositionality that were not achievable in previous logics. We prove LHC sound and apply it to a number of challenging examples.

Emanuele D'Osualdo, Azadeh Farzan, Derek Dreyer (Thu, 15 Sep 2022 00:00:00 GMT)
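To make the notion of an arity-n hypersafety property concrete (this sketch is illustrative only and has nothing to do with LHC's proof rules): determinism relates pairs of traces, while associativity relates three runs of the same operation. For tiny finite input spaces such properties can be checked by brute force rather than proved.

```python
# Illustrative brute-force checks of hypersafety properties on finite inputs.
# Determinism is a 2-safety property; associativity relates three runs.

def is_deterministic(f, inputs):
    """2-safety: any two runs of f on the same input produce the same output."""
    return all(f(x) == f(x) for x in inputs)

def is_associative(op, values):
    """Relates three runs of op: op(op(a, b), c) == op(a, op(b, c))."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in values for b in values for c in values)

vals = range(-3, 4)
assert is_deterministic(lambda x: x * x, vals)
assert is_associative(lambda a, b: a + b, vals)      # addition is associative
assert not is_associative(lambda a, b: a - b, vals)  # subtraction is not
```

A relational logic like LHC replaces such exhaustive checking with compositional proof rules over hyper-triples, which is what makes the approach scale beyond finite toy domains.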