IA Scholar Query: An Algebraic Semantics of Notional Entailment Logic Cn.
https://scholar.archive.org/
Internet Archive Scholar query results feed — info@archive.org — Wed, 28 Sep 2022

A Tutorial Introduction to Lattice-based Cryptography and Homomorphic Encryption
https://scholar.archive.org/work/vlqa6rnsa5d3vnpa3qeaizot6a
Why study Lattice-based Cryptography? There are a few ways to answer this question.
1. It is useful to have cryptosystems that are based on a variety of hard computational problems, so that the different cryptosystems are not all vulnerable in the same way.
2. The computational aspects of lattice-based cryptosystems are usually simple to understand and fairly easy to implement in practice.
3. Lattice-based cryptosystems have lower encryption/decryption computational complexities than popular cryptosystems based on the integer factorisation or discrete logarithm problems.
4. Lattice-based cryptosystems enjoy strong worst-case hardness security proofs based on approximate versions of known NP-hard lattice problems.
5. Lattice-based cryptosystems are believed to be good candidates for post-quantum cryptography, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best-known classical (non-quantum) algorithms, unlike for the integer factorisation and (elliptic-curve) discrete logarithm problems.
6. Last but not least, interesting structures in lattice problems have led to significant advances in Homomorphic Encryption, a new research area with wide-ranging applications.
Yang Li, Kee Siong Ng, Michael Purcell — Wed, 28 Sep 2022

Artificial Intelligence and Advanced Materials
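An aside on the lattice-based cryptography abstract above: its point about computational simplicity can be illustrated with a toy LWE-style public-key scheme. Everything here is hypothetical and deliberately insecure (tiny parameters, no formal connection to the tutorial's constructions); it is only a sketch of the general shape of such schemes.

```python
import random

q, n, m = 1021, 4, 10  # toy modulus and dimensions: far too small for security

def keygen():
    """Secret s; public (A, b) with b = A*s + e (mod q) for small noise e."""
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n))
          + random.choice([-1, 0, 1])) % q for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit):
    """Sum a random subset of samples; hide the bit in the high half of Z_q."""
    A, b = pk
    subset = [i for i in range(m) if random.random() < 0.5]
    c1 = [sum(A[i][j] for i in subset) % q for j in range(n)]
    c2 = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    """Subtract <c1, s>; the residue is near 0 (bit 0) or near q/2 (bit 1)."""
    c1, c2 = ct
    v = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    return 1 if q // 4 < v < 3 * q // 4 else 0
```

Decryption is correct here because the accumulated noise (at most m = 10 in absolute value) stays well below q/4, so the residue lands unambiguously near 0 or near q/2.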
https://scholar.archive.org/work/tkf566mg6zf77a7xan6anloxvu
Artificial intelligence is gaining strength, and materials science can both contribute to and profit from it. In a simultaneous progress race, new materials, systems and processes can be devised and optimized thanks to machine learning techniques, and such progress can be turned into innovative computing platforms. Future materials scientists will profit from understanding how machine learning can boost the conception of advanced materials. This review covers aspects of computation, from the fundamentals to the directions taken and the repercussions produced, to account for the origins, procedures and applications of artificial intelligence. Machine learning and its methods are reviewed to provide basic knowledge of their implementation and potential. The materials and systems used to implement artificial intelligence with electric charges are finding serious competition from other information-carrying and information-processing agents. The impact these techniques are having on the inception of new advanced materials is so deep that a new paradigm is developing, in which implicit knowledge is mined to conceive materials and systems for functions, instead of finding applications for already-found materials. How far this trend can be carried is hard to fathom, as exemplified by the power to discover unheard-of materials or physical laws buried in data.
Cefe López — Wed, 28 Sep 2022

On six-valued logics of evidence and truth expanding Belnap-Dunn four-valued logic
https://scholar.archive.org/work/lczmyvgdnrewfpp7wp3axfhnt4
The main aim of this paper is to introduce the logics of evidence and truth LETK+ and LETF+, together with sound, complete, and decidable 6-valued deterministic semantics for them. These logics extend the logics LETK and LETF with rules of propagation of classicality, which are inferences that express how the classicality operator o is transmitted from less complex to more complex sentences, and vice versa. The 6-valued semantics proposed here extends the 4 values of Belnap-Dunn logic with 2 more values intended to represent (positive and negative) reliable information. A 6-valued non-deterministic semantics for LETK is obtained by means of Nmatrices based on swap structures; the 6-valued semantics for LETK+ is then obtained by imposing restrictions on the semantics of LETK, and these restrictions correspond exactly to the rules of propagation of classicality that extend LETK. The logic LETF+ is obtained as the implication-free fragment of LETK+. We also show that the 6 values of LETK+ and LETF+ define a lattice structure that extends the lattice L4 defined by the Belnap-Dunn 4-valued logic with the 2 additional values mentioned above, intuitively interpreted as positive and negative reliable information. Finally, we show that LETK+ is Blok-Pigozzi algebraizable and that its implication-free fragment LETF+ coincides with the degree-preserving logic of the involutive Stone algebras.
Marcelo E. Coniglio, Abilio Rodrigues — Sun, 25 Sep 2022

Stochastic Mathematical Systems
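A side note on the Belnap-Dunn lattice L4 mentioned in the abstract above: its four values can be encoded as pairs recording whether a sentence is supported and/or refuted, with conjunction and disjunction as meet and join in the truth order. This is a standard textbook presentation of the 4-valued base, not the authors' 6-valued construction:

```python
# Belnap-Dunn values as (supported, refuted) evidence pairs:
# t = true only, f = false only, B = both (glut), N = neither (gap).
T_ONLY, F_ONLY, BOTH, NEITHER = (1, 0), (0, 1), (1, 1), (0, 0)

def neg(v):
    """Negation swaps support and refutation; B and N are fixed points."""
    s, r = v
    return (r, s)

def conj(a, b):
    """Meet in the truth order: supported iff both are, refuted iff either is."""
    return (a[0] & b[0], a[1] | b[1])

def disj(a, b):
    """Join in the truth order: supported iff either is, refuted iff both are."""
    return (a[0] | b[0], a[1] & b[1])
```

For example, the conjunction of the glut B with the gap N comes out as f, while their disjunction comes out as t, matching the usual L4 tables.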
https://scholar.archive.org/work/ieqk2ruqljh2rms2yeoz5ls7qy
We introduce a framework that can be used to model both mathematics and human reasoning about mathematics. This framework involves stochastic mathematical systems (SMSs), which are stochastic processes that generate pairs of questions and associated answers (with no explicit referents). We use the SMS framework to define normative conditions for mathematical reasoning, by defining a "calibration" relation between a pair of SMSs. The first SMS is the human reasoner, and the second is an "oracle" SMS that can be interpreted as deciding whether the question-answer pairs of the reasoner SMS are valid. To ground thinking, we understand the answers to questions given by this oracle to be the answers that would be given by an SMS representing the entire mathematical community in the infinite long run of the process of asking and answering questions. We then introduce a slight extension of SMSs to allow us to model both the physical universe and human reasoning about the physical universe, and we define a slightly different calibration relation appropriate for the case of scientific reasoning. In this case the first SMS represents a human scientist predicting the outcome of future experiments, while the second SMS represents the physical universe in which the scientist is embedded, with the question-answer pairs of that SMS being specifications of the experiments that will occur and the outcomes of those experiments, respectively. Next we derive conditions justifying two important patterns of inference in both mathematical and scientific reasoning: i) the practice of increasing one's degree of belief in a claim as one observes increasingly many lines of evidence for that claim, and ii) abduction, the practice of inferring a claim's probability of being correct from its explanatory power with respect to some other claim that is already taken to hold for independent reasons.
David H. Wolpert, David B. Kinney — Thu, 01 Sep 2022

Conic Idempotent Residuated Lattices
https://scholar.archive.org/work/oetla4aaifgfrnufuwoyuyy7r4
We give a structural decomposition of conic idempotent residuated lattices, showing that each of them is an ordinal sum of certain simpler partially ordered structures. This ordinal sum is indexed by a totally ordered residuated lattice, which serves as its skeleton and is both a subalgebra and a nuclear image, and we equationally characterize which totally ordered residuated lattices appear as such skeletons. Using the two inverse operations induced by the residuals, we further characterize both congruence and subalgebra generation in conic idempotent residuated lattices. We show that every variety generated by conic idempotent residuated lattices enjoys the congruence extension property. In particular, this holds for semilinear idempotent residuated lattices. Moreover, we provide a detailed analysis of the structure of idempotent residuated chains serving as index sets on two levels: as certain enriched Galois connections and as enhanced monoidal preorders. Using this, we show that although conic idempotent residuated lattices do not enjoy the amalgamation property, the natural class of rigid and conjunctive conic idempotent residuated lattices has the strong amalgamation property, and consequently has surjective epimorphisms. We extend this result to the variety generated by rigid and conjunctive conic idempotent residuated lattices, and establish the amalgamation, strong amalgamation, and epimorphism-surjectivity properties for several important subvarieties. Based on this algebraic work, we obtain local deduction theorems, the deductive interpolation property, and the projective Beth definability property for the corresponding substructural logics.
Wesley Fussner, Nick Galatos — Sat, 20 Aug 2022

Semantics and canonicalisation of SPARQL 1.1
https://scholar.archive.org/work/dbqwzwsi5bca3eil4gj6e6j4au
We define a procedure for canonicalising SPARQL 1.1 queries. Specifically, given two input queries that return the same solutions modulo variable names over any RDF graph (which we call congruent queries), the canonicalisation procedure aims to rewrite both input queries to a syntactically canonical query that likewise returns the same results modulo variable renaming. The use-cases for such canonicalisation include caching, optimisation, redundancy elimination, question answering, and more besides. To begin, we formally define the semantics of the SPARQL 1.1 language, including features often overlooked in the literature. We then propose a canonicalisation procedure based on mapping a SPARQL query to an RDF graph, applying algebraic rewritings, removing redundancy, and then using canonical labelling techniques to produce a canonical form. Unfortunately, a full canonicalisation procedure for SPARQL 1.1 queries would be undecidable. We instead propose a procedure that we prove to be sound and complete for a decidable fragment of monotone queries under both set and bag semantics, and that is sound but incomplete for the full SPARQL 1.1 query language. Although the worst case of the procedure is super-exponential, our experiments show that it is efficient for real-world queries, and that such difficult cases are rare.
Jaime Salas, Aidan Hogan, Guilin Qi — Thu, 18 Aug 2022

Modal expansions of ririgs
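To illustrate the notion of canonicalisation modulo variable naming from the abstract above, in a drastically simplified form (the paper's procedure uses RDF graph encodings and canonical labelling, not brute force): a basic graph pattern can be canonicalised by trying every bijective renaming of its variables and keeping the lexicographically least result. The function name and triple encoding here are invented for the sketch.

```python
from itertools import permutations

def canonicalise_bgp(patterns):
    """Brute-force canonical form of a basic graph pattern (a list of
    triples, with variables written as '?name'): try every bijective
    renaming of variables onto fresh names ?v0, ?v1, ... and keep the
    lexicographically smallest sorted pattern. Factorial cost: toy only."""
    vars_ = sorted({t for p in patterns for t in p if t.startswith("?")})
    fresh = [f"?v{i}" for i in range(len(vars_))]
    best = None
    for perm in permutations(fresh):
        renaming = dict(zip(vars_, perm))
        candidate = sorted(tuple(renaming.get(t, t) for t in p)
                           for p in patterns)
        if best is None or candidate < best:
            best = candidate
    return best
```

Two congruent patterns then map to the same canonical form, which is exactly the property the paper's (far more scalable) procedure targets for full SPARQL 1.1.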
https://scholar.archive.org/work/65wx2jkglfdg7ajfn7xrrtoneu
In this paper we introduce the variety of I-modal ririgs. We characterize the congruence lattice of its members by means of I-filters, and we provide a description of I-filter generation. We also provide an axiomatic presentation for the variety generated by chains of the subvariety of contractive I-modal ririgs. Finally, we introduce a Hilbert-style calculus for a logic that has I-modal ririgs as an equivalent algebraic semantics, and we prove that this logic has the parametrized local deduction-detachment theorem.
Agustín L. Nagy, William J. Zuluaga Botero — Tue, 09 Aug 2022

3:1M: An Interpretable Dynamical AI Agent
https://scholar.archive.org/work/xa5pexcsm5hurkapbbaap2lstm
Developmental Psychology has been used in conjunction with algorithmic and/or dynamical methods to design AI agents, as it deals with human cognitive development from birth. The conjunction with the algorithmic method can build interpretable AI agents (agents with humanly comprehensible processes and characteristics) through the use of concept-representing symbols. It, however, leads to symbol-grounding and/or other problems that offset its merit on interpretability, which may facilitate the creation of human intelligence-level AGI. On the other hand, the conjunction with the dynamical method emphasises, in the design of AI agents, the real-time unfolding of processes internal to these agents and in their interactions with the environment. However, its related literature is dominated by work on uninterpretable AI agents. Moreover, none of the current literature on combining Developmental Psychology with a hybrid of the two methodologies deals with resolving all the mentioned deficiencies. Thus, the novel AI agent 3•1M is presented in this paper, modelled as a set of dynamical difference equations containing Developmental Psychology-based and concept-representing mathematical operators acting on information, some of which is derived from its interactions with an artificial environment. Its interpretability is facilitated by the operators, as demonstrated in the analytical investigations of its computer simulation-confirmed cognitive abilities (e.g., the generalization and fusion of information), and of the dynamics of the information attribute inspired by the nature of human consciousness. This dynamics is one of the major 3•1M features envisioned as necessary to create the future developmental AGI version of 3M, i.e., the abstract base class of 3•1M.
Manuel Abello — Tue, 19 Jul 2022

Supporting Explainable AI on Semantic Constraint Validation
https://scholar.archive.org/work/y5gfpqy7ijhezhcgwfsgwknt54
There is a rising number of knowledge graphs published through various sources. This enormous amount of linked data strives to give entities a semantic context, and using SHACL, the entities can be validated with respect to that context. At the same time, the increasing use of AI models in production systems comes with great responsibility in various areas. Predictive models such as linear regression, logistic regression, and tree-based models are still frequently used, as they come with a simple structure that allows for interpretability. However, explaining models includes verifying whether the model makes predictions based on human constraints or scientific facts. This work proposes to use the semantic context of the entities in knowledge graphs to validate predictive models with respect to user-defined constraints, thereby providing a theoretical framework for a model-agnostic validation engine based on SHACL. In a second step, the model validation results are summarized in the case of a decision tree and visualized coherently with the model. Finally, the performance of the framework is evaluated based on a Python implementation.
Julian Alexander Gercke, Technische Informationsbibliothek (TIB), Philipp D. Rohde, Maria-Ester Vidal — Mon, 18 Jul 2022

Thirty-seven years of relational Hoare logic: remarks on its principles and history
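As a toy illustration of the kind of model validation described in the abstract above (plain Python rather than SHACL, with entirely hypothetical entities, model, and constraint): check whether a predictive model's outputs respect a user-defined constraint over entities that carry semantic context.

```python
# Hypothetical entities with attributes standing in for KG context.
entities = [
    {"id": "p1", "age": 16, "income": 1200},
    {"id": "p2", "age": 34, "income": 2500},
    {"id": "p3", "age": 40, "income": 800},
]

def model(entity):
    """Stand-in predictive model (e.g. a single decision-tree rule)."""
    return "approved" if entity["income"] > 1000 else "rejected"

def constraint(entity, prediction):
    """User-defined constraint: minors must never be approved."""
    return not (entity["age"] < 18 and prediction == "approved")

# Model-agnostic check: run the model and report constraint violations.
violations = [e["id"] for e in entities if not constraint(e, model(e))]
```

Here the income-only rule approves the 16-year-old "p1", so the validator flags that entity; a SHACL-based engine generalises this idea to declarative shapes over real knowledge graphs.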
https://scholar.archive.org/work/qeof675gqzghjko3stly5timd4
Relational Hoare logics extend the applicability of modular, deductive verification to encompass important 2-run properties, including dependency requirements such as confidentiality and program relations such as equivalence or similarity between program versions. A considerable number of recent works introduce different relational Hoare logics without yet converging on a core set of proof rules. This paper looks back to little-known early work. This brings to light some principles that clarify and organize the rules, as well as suggesting a new rule and a new notion of completeness.
David A. Naumann — Sat, 16 Jul 2022

Proceedings of the SNS Logic Colloquium March 1990
https://scholar.archive.org/work/myebfuw7yfaw7a6vspe24lhx4u
Republication of the proceedings of the Informal Logic Colloquium held in March 1990 at the Seminar für natürlichsprachliche Systeme (SNS) of the University of Tübingen.
Peter Schroeder-Heister, Universitaet Tuebingen — Mon, 27 Jun 2022

Integrating deduction and model finding in a language independent setting
https://scholar.archive.org/work/4c5gc6p5dbbbxg6obbsj4blipy
Software artifacts are ubiquitous in our lives, being an essential part of home appliances, cars, cell phones, and even more critical activities like aeronautics and the health sciences. In this context software failures may produce enormous losses, either economic or, in the extreme, in human lives. Software analysis is an area of software engineering concerned with the application of different techniques to prove the (relative) absence of errors in software artifacts. In many cases these methods of analysis are applied by following certain methodological directives that ensure better results. In a previous work we presented the notion of satisfiability calculus as a model-theoretical counterpart of Meseguer's proof calculus, providing a formal foundation for a variety of tools that are based on model construction. The present work shows how effective satisfiability sub-calculi, a special type of satisfiability calculi, can be combined with proof calculi, in order to provide foundations for certain methodological approaches to software analysis by relating the construction of finite counterexamples to the absence of proofs, in an abstract categorical setting.
Carlos Gustavo Lopez Pombo, Agustín Eloy Martinez Suñé — Tue, 14 Jun 2022

Practical synthesis from real-world oracles
https://scholar.archive.org/work/xlieanq2hfdyzoompffjj2e254
As software systems become increasingly heterogeneous, the ability of compilers to reason about an entire system has decreased. When components of a system are implemented not as traditional programs but as specialised hardware, optimised architecture-specific libraries, or network services, the compiler is unable to cross these abstraction barriers and analyse the system as a whole. If these components could be modelled or understood as programs, then the compiler would be able to reason about their behaviour without concern for their internal implementation details: a homogeneous view of the entire system would be afforded. However, it is not often the case that such components ever corresponded to an original program. This means that to facilitate this homogeneous analysis, programmatic models of component behaviour must be learned or constructed automatically. Constructing these models is an inductive program synthesis problem, albeit a challenging one that is largely beyond the ability of existing implementations. In order for the problem to be made tractable, information provided by the underlying context (i.e. the real component behaviour to be matched) must be integrated. This thesis presents three program synthesis approaches that integrate contextual information to synthesise programmatic models for real, existing components. The first, Annote, exploits informally-encoded information about a component's interface (e.g. from documentation) by weaving that information into an extended type-and-attribute system for component interfaces. The second, Presyn, learns a pair of cooperating probabilistic models from prior syntheses that aim to predict likely program structure based on a component's interface. Finally, Haze uses observations of common side-effects of component executions to bias the search for programs. These approaches are each evaluated against comparable synthesisers from the literature, on a set of benchmark problems derived from real components. Learning models for component behavi [...]
Bruce Collie, University Of Edinburgh, Michael O'Boyle, Myrto Arapinis, Bjoern Franke — Mon, 13 Jun 2022

Faust and Reactive Functions: A Traced Prop of Signals and Signal Relations
https://scholar.archive.org/work/glsrf77krzazrch5pdjoahfyma
I propose a categorical interpretation of the algebra presented by Orlarey et al. in their paper An Algebra For Block Diagrams. The category in question is the Traced Prop of Relations on Signals, where Sequential and Parallel composition are relational composition and the monoidal product respectively, and Recursive composition is a combination of these, the Trace, and more. In this interpretation reactive functions correspond to signal processors. I prove the theorem that the trace of any delayed reactive function is a reactive function. This makes explicit the need for an implicit delay in the definition of Recursion. Furthermore, I show, by way of preserving reactivity and functionality, that each of the five main FAUST operators returns a valid signal processor when fed two of them.
Nicholas Connell — Tue, 07 Jun 2022

Tecnologias, Técnicas e Tendências em Engenharia Biomédica
https://scholar.archive.org/work/tiaituvzg5h7ziymjzabn2rydm
Biomedical Engineering (EB) is an interface area that brings together professionals and researchers from many disciplines. Because of its great breadth, it can often be difficult to delimit its boundaries. In this context, this work presents a collection of articles that defines and discusses applications of Biomedical Engineering in various scenarios. The aim is to provide the reader with concrete, practical examples that help in identifying technologies, techniques and trends in the field. This book is the result of the many initiatives for disseminating knowledge and promoting Biomedical Engineering in Brazil carried out during the XXIV Brazilian Congress of Biomedical Engineering (CBEB 2014), promoted by the Brazilian Society of Biomedical Engineering (SBEB - www.sbeb.org.br), held from 13 to 17 October 2014 in Uberlândia, Minas Gerais, Brazil. The book gathers contributions from renowned researchers who presented and discussed current topics of great national and international interest: Ethics and Public Health, Health Technology Assessment, Medical Imaging, Biomedical Optics, Computational Biology, Biosensors, Processing and Modelling of Biological Systems, Rehabilitation Engineering, and Biomedical Instrumentation. All articles were peer-reviewed, which contributed significantly to the technical and scientific excellence of this book. The editors thank all the reviewers who donated their time and expertise, as well as the funding agencies (CAPES, CNPq, FAPEMIG) that financed this publication.
Editors: Adriano de Oliveira Andrade, Alcimar Barbosa Soares, Alexandre Cardoso, Edgard Afonso Lamounier — Wed, 20 Apr 2022

Mortensen Logics
https://scholar.archive.org/work/tf2jrrlebvcoveu7ape4dab5tq
Mortensen introduced a connexive logic commonly known as 'M3V', obtained by adding a special conditional to LP. Among its most notable features, besides being connexive, M3V is negation-inconsistent and validates the negation of every conditional. But Mortensen has also studied and applied extensively other, non-connexive logics, for example closed set logic, CSL, and a variant of Sette's logic, identified and called 'P2' by Marcos. In this paper, we systematically analyze and compare the connexive variants of CSL and P2, obtained by adding the M3V conditional to them. Our main observations are two. First, the inconsistency of M3V is exacerbated in the connexive variant of closed set logic, while it is attenuated in the connexive variant of the Sette-like P2. Second, the M3V conditional is, unlike other conditionals, "connexively stable", meaning that it remains connexive when combined with the main paraconsistent negations.
Luis Estrada-González, Fernando Cano-Jorge — Thu, 14 Apr 2022

Foundations for programming and implementing effect handlers
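For readers unfamiliar with LP, the base logic mentioned in the abstract above: its standard connectives can be sketched over the values {0, 1/2, 1}, with 1/2 and 1 designated. The M3V conditional itself is deliberately omitted, since its table is part of the paper's subject matter; only the well-known LP core is shown.

```python
# LP (Priest's Logic of Paradox) over {0.0, 0.5, 1.0}:
# 0.5 models "both true and false"; 0.5 and 1.0 are designated.
def lp_neg(a):
    return 1.0 - a

def lp_conj(a, b):
    return min(a, b)

def lp_disj(a, b):
    return max(a, b)

def designated(a):
    return a >= 0.5

# Paraconsistency in action: a contradiction on the glutty value 0.5
# is still designated, so it does not explode into triviality.
glut = 0.5
```

Here lp_conj(glut, lp_neg(glut)) evaluates to 0.5, which remains designated, whereas the same contradiction on a classical value (0.0 or 1.0) is undesignated.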
https://scholar.archive.org/work/b2vymvtluzesrpbgsv22n4f3nm
First-class control operators provide programmers with an expressive and efficient means for manipulating control through reification of the current control state as a first-class object, enabling programmers to implement their own computational effects and control idioms as shareable libraries. Effect handlers provide a particularly structured approach to programming with first-class control by naming control-reifying operations and separating them from their handling. This thesis is composed of three strands of work in which I develop operational foundations for programming and implementing effect handlers, as well as exploring the expressive power of effect handlers. The first strand develops a fine-grain call-by-value core calculus of a statically typed programming language with a structural notion of effect types, as opposed to the nominal notion of effect types that dominates the literature. With the structural approach, effects need not be declared before use. The usual safety properties of statically typed programming are retained by making crucial use of row polymorphism to build and track effect signatures. The calculus features three forms of handlers: deep, shallow, and parameterised. They each offer a different approach to manipulating the control state of programs. Traditional deep handlers are defined by folds over computation trees, and are the original construct proposed by Plotkin and Pretnar. Shallow handlers are defined by case splits (rather than folds) over computation trees. Parameterised handlers are deep handlers extended with a state value that is threaded through the folds over computation trees. To demonstrate the usefulness of effects and handlers as a practical programming abstraction I implement the essence of a small UNIX-style operating system, complete with multi-user environment, time-sharing, and file I/O. The second strand studies continuation passing style (CPS) and abstract machine semantics, which are foundational techniques that admit a unified basis for implementing deep, shallow [...]
Daniel Hillerström, University Of Edinburgh, Sam Lindley, John Longley — Tue, 12 Apr 2022

Dagstuhl Reports, Volume 11, Issue 10, October 2021, Complete Issue
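As a loose illustration of the deep handlers described in the effect-handlers abstract above (Python generators standing in for the thesis's typed calculus; the operation names and handler encoding are invented for this sketch): a deep handler folds over the whole computation, interpreting each performed operation and resuming the computation with the handler's result.

```python
def handle(comp, handlers):
    """Deep-handler-style fold: interpret every operation the computation
    performs (yielded as (op, arg) pairs), resuming it with the handler's
    result, until the computation returns."""
    gen = comp()
    try:
        op, arg = next(gen)
        while True:
            op, arg = gen.send(handlers[op](arg))
    except StopIteration as done:
        return done.value

def program():
    x = yield ("get", None)    # perform the 'get' effect
    yield ("put", x + 1)       # perform the 'put' effect
    return x + 1

# Handlers interpret the abstract operations against a mutable cell.
state = {"cell": 41}
result = handle(program, {
    "get": lambda _: state["cell"],
    "put": lambda v: state.update(cell=v),
})
```

A shallow handler, by contrast, would interpret only the first operation and hand the remainder of the computation back to the caller; the deep version above re-installs itself for every subsequent operation.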
https://scholar.archive.org/work/3w5nqw2gangnrkuqgfzp32cw4u
Dagstuhl Reports, Volume 11, Issue 10, October 2021, Complete Issue — Mon, 11 Apr 2022