IA Scholar Query: Strong normalization in a typed lambda calculus with lambda structured types.
https://scholar.archive.org/
Internet Archive Scholar query results feed (en) — info@archive.org — Thu, 04 Aug 2022 00:00:00 GMT — fatcat-scholar — https://scholar.archive.org/help

Size-Based Termination for Non-Positive Types in Simply Typed Lambda-Calculus
https://scholar.archive.org/work/vsqkzzxvwvagpiupvleci3ayoe
So far, several typed lambda-calculus systems have been combined with algebraic rewrite rules, and the termination (in other words, strong normalisation) problem of the combined systems has been discussed. Using the size-based approach, Blanqui formulated a termination criterion for simply typed lambda-calculus with algebraic rewrite rules which guarantees, in some specific cases, the termination of the rewrite relation induced by beta-reduction and algebraic rewrite rules on strictly or non-strictly positive inductive types. Using the inflationary fixed-point construction, we extend this termination criterion so that it is possible to show the termination of the rewrite relation induced by some rewrite rules on types which are called non-positive types. In addition, we note that a condition in Blanqui's proof can be dropped, and this improves the criterion also for non-strictly positive inductive types.
Yuta Takahashi, Henning Basold, Jesper Cockx, Silvia Ghilezan — Thu, 04 Aug 2022 00:00:00 GMT

Principal Types as Lambda Nets
https://scholar.archive.org/work/4rtw5fisz5h5zokupuxaw4grna
We show that there are connections between principal type schemata, cut-free λ-nets, and normal forms of the λ-calculus, and hence there are correspondences between the normalisation algorithms of the above structures, i.e. unification of principal types, cut-elimination of λ-nets, and normalisation of λ-terms. Once the above correspondences have been established, properties of the typing system, such as typability, subject reduction, and inhabitation, can be derived from properties of λ-nets, and vice versa. We illustrate the above pattern on a specific type assignment system, we study principal types for this system, and we show that they correspond to λ-nets with a non-standard notion of cut-elimination. Properties of the type system are then derived from results on λ-nets.
Pietro Di Gianantonio, Marina Lenisa, Henning Basold, Jesper Cockx, Silvia Ghilezan — Thu, 04 Aug 2022 00:00:00 GMT

LIPIcs, Volume 239, TYPES 2021, Complete Volume
https://scholar.archive.org/work/uxb5tcwg6bflpdk4rslta5pn6a
Henning Basold, Jesper Cockx, Silvia Ghilezan — Thu, 04 Aug 2022 00:00:00 GMT

The Zoo of Lambda-Calculus Reduction Strategies, And Coq
https://scholar.archive.org/work/un3ebtmdjndf3ftbrrsf5g7b5e
We present a generic framework for the specification and reasoning about reduction strategies in the lambda calculus, representable as sets of term decompositions. It is provided as a Coq formalization that features a novel format of phased strategies. It facilitates concise description and algebraic reasoning about properties of reduction strategies. The formalization accommodates many well-known strategies, both weak and strong, such as call by name, call by value, head reduction, normal order, full β-reduction, etc. We illustrate the use of the framework as a tool to inspect and categorize the "zoo" of existing strategies, as well as to discover and study new ones with particular properties.
Małgorzata Biernacka, Witold Charatonik, Tomasz Drab, June Andronick, Leonardo de Moura — Wed, 03 Aug 2022 00:00:00 GMT

Contact and friction simulation for computer graphics
https://scholar.archive.org/work/a46z76uy3bawzjnjqbvigs372u
Efficient simulation of contact is of interest for numerous physics-based animation applications. For instance, virtual reality training, video games, rapid digital prototyping, and robotics simulation are all examples of applications that involve contact modeling and simulation. However, despite its extensive use in modern computer graphics, contact simulation remains one of the most challenging problems in physics-based animation. This course covers fundamental topics on the nature of contact modeling and simulation for computer graphics. Specifically, we provide mathematical details about formulating contact as a complementarity problem in rigid body and soft body animations. We briefly cover several approaches for contact generation using discrete collision detection. Then, we present a range of numerical techniques for solving the associated linear complementarity problems (LCPs) and nonlinear complementarity problems (NCPs). The advantages and disadvantages of each technique are weighed in a practical manner, and best practices for implementation are discussed. Finally, we conclude the course with several advanced topics such as methods for soft body contact problems, barrier functions, and anisotropic friction modeling. Programming examples are provided in our appendix as well as on the course website to accompany the course notes.
Sheldon Andrews, Kenny Erleben, Zachary Ferguson — Tue, 02 Aug 2022 00:00:00 GMT

Dynamic deformables
https://scholar.archive.org/work/ail7xnlyuzeblargndqwropb3e
Simulating dynamic deformation has been an integral component of Pixar's storytelling since Boo's shirt in Monsters, Inc. (2001). Recently, several key transformations have been applied to Pixar's core simulator Fizt that improve its speed, robustness, and generality. Starting with Coco (2017), improved collision detection and response were incorporated into the cloth solver; then with Cars 3 (2017) 3D solids were introduced; and in Onward (2020) clothing is allowed to interact with a character's body with two-way coupling. The 3D solids are based on a fast, compact, and powerful new formulation that we have published over the last few years at SIGGRAPH. Under this formulation, the construction and eigendecomposition of the force gradient, long considered the most onerous part of the implementation, becomes fast and simple. We provide a detailed, self-contained, and unified treatment here that is not available in the technical papers. We also provide, for the first time, open-source C++ implementations of many of the described algorithms. This new formulation is only a starting point for creating a simulator that is up to the challenges of a production environment. One challenge is performance: we discuss our current best practices for accelerating system assembly and solver performance. Another challenge that requires considerable attention is robust collision detection and response. Much has been written about collision detection approaches such as proximity queries, continuous collisions, and global intersection analysis. We discuss our strategies for using these techniques, which provide us with valuable information that is needed to handle challenging scenarios.
Theodore Kim, David Eberle — Tue, 02 Aug 2022 00:00:00 GMT

Curry and Howard Meet Borel
https://scholar.archive.org/work/4phmcjuulzhtdfz5bjhavvgwiq
We show that an intuitionistic version of counting propositional logic corresponds, in the sense of Curry and Howard, to an expressive type system for the probabilistic event λ-calculus, a vehicle calculus in which both call-by-name and call-by-value evaluation of discrete randomized functional programs can be simulated. In this context, proofs (respectively, types) do not guarantee that validity (respectively, termination) holds, but reveal the underlying probability. We finally show how to obtain a system precisely capturing the probabilistic behavior of λ-terms, by endowing the type system with an intersection operator.
Melissa Antonelli, Ugo Dal Lago, Paolo Pistone — Tue, 02 Aug 2022 00:00:00 GMT

A strong call-by-need calculus
https://scholar.archive.org/work/3cbecf3jrraffmhhxzhagvwonm
We present a call-by-need λ-calculus that enables strong reduction (that is, reduction inside the body of abstractions) and guarantees that arguments are only evaluated if needed and at most once. This calculus uses explicit substitutions and subsumes the existing strong-call-by-need strategy, but allows for more reduction sequences, and often shorter ones, while preserving the neededness. The calculus is shown to be normalizing in a strong sense: Whenever a λ-term t admits a normal form n in the λ-calculus, then any reduction sequence from t in the calculus eventually reaches a representative of the normal form n. We also exhibit a restriction of this calculus that has the diamond property and that only performs reduction sequences of minimal length, which makes it systematically better than the existing strategy. We have used the Abella proof assistant to formalize part of this calculus, and discuss how this experiment affected its design. In particular, it led us to derive a new description of call-by-need reduction based on inductive rules.
Thibaut Balabonski — Tue, 02 Aug 2022 00:00:00 GMT

A direct computational interpretation of second-order arithmetic via update recursion
https://scholar.archive.org/work/t62vigj2uzestl2e7chawgac7u
Second-order arithmetic has two kinds of computational interpretations: via Spector's bar recursion or via Girard's polymorphic lambda-calculus. Bar recursion interprets the negative translation of the axiom of choice which, combined with an interpretation of the negative translation of the excluded middle, gives a computational interpretation of the negative translation of the axiom scheme of comprehension. It is then possible to instantiate universally quantified sets with arbitrary formulas (second-order elimination). On the other hand, polymorphic lambda-calculus interprets second-order elimination directly by means of polymorphic types. The present work aims at bridging the gap between these two interpretations by interpreting second-order elimination directly through update recursion, which is a variant of bar recursion.
Valentin Blot — Tue, 02 Aug 2022 00:00:00 GMT

Quantum Mechanics and General Relativity are fully compatible, and have a common origin: the expanding (hyper) balloon universe
https://scholar.archive.org/work/ahrw5avjlzedtmp3dydkf4d2y4
Please download the paper and then read the abstract (since crucial formulas are not appearing in this window, and hyperlinks are not working). This is just an overall view: Relativity is 'inside the light cone' phenomena, while Quantum Mechanics is 'outside the light cone' phenomena, dictated just by the scale (whether we use human/astronomical scale or sub-atomic scale). A recent paper ['Quantum principle of relativity'; Andrzej Dragan, Artur Ekert, New J. Phys. 22 (2020) 033098] has shown that every exotic quantum effect like superposition, entanglement, probabilistic behavior, multiple paths, etc. can be explained just by allowing the superluminal possibility. The 'inside the light cone' phenomena and the 'outside the light cone' phenomena together span the entire region within the space and time axes. Only in unison do they complete the entire picture. We failed to realize that the same spacetime is getting split into 'space like' and 'time like' regions based on scale. And the reason behind this is not the magical (?) speed of light. That would have turned relativity into just a branch of electromagnetism. It turns out that c is the radial expansion velocity of our universe. Special Relativity and Quantum Mechanics (QM) are like two sides of the same coin. But understanding the relation between QM and General Relativity (GR) is a bit tricky, because according to GR, gravity is the warping/curvature of the 4-dimensional spacetime itself. Once I cover this topic, it becomes clear that QM and GR are fully compatible. Nature simply cannot afford to make our two greatest theories incompatible. We do not need to quantize gravity, since gravity is not a true force, and we have already achieved all the necessary quantizations for the other 3 forces of nature. Unfortunately, all modern research towards unifying QM and GR is intensely focused on 'Quantum Gravity'.
[By the way, LHC (CERN) results are tightening the noose on the [...]
Subhajit Waugh — Sat, 30 Jul 2022 00:00:00 GMT

Relating Functional and Imperative Session Types
https://scholar.archive.org/work/z6op7upzzravpczmf5iifupyea
Imperative session types provide an imperative interface to session-typed communication. In such an interface, channel references are first-class objects with operations that change the typestate of the channel. Compared to functional session type APIs, the program structure is simpler at the surface, but typestate is required to model the current state of communication throughout. Following an early work that explored the imperative approach, a significant body of work on session types has neglected the imperative approach and opts for a functional approach that uses linear types to manage channel references soundly. We demonstrate that the functional approach subsumes the early work on imperative session types by exhibiting a typing and semantics preserving translation into a system of linear functional session types. We further show that the untyped backwards translation from the functional to the imperative calculus is semantics preserving. We restrict the type system of the functional calculus such that the backwards translation becomes type preserving. Thus, we precisely capture the difference in expressiveness of the two calculi and conclude that the lack of expressiveness in the imperative calculus is largely due to restrictions imposed by its type system.
Hannes Saffrich, Peter Thiemann — Thu, 28 Jul 2022 00:00:00 GMT

Addressing Machines as models of lambda-calculus
https://scholar.archive.org/work/eplwtpmnbrbvzcbyaj2gx76ukm
Turing machines and register machines have been used for decades in theoretical computer science as abstract models of computation. The λ-calculus has also played a central role in this domain, as it allows one to focus on the notion of functional computation, based on the substitution mechanism, while abstracting away from implementation details. The present article starts from the observation that the equivalence between these formalisms is based on the Church-Turing Thesis rather than an actual encoding of λ-terms into Turing (or register) machines. The reason is that these machines are not well-suited for modelling λ-calculus programs. We study a class of abstract machines that we call "addressing machines" since they are only able to manipulate memory addresses of other machines. The operations performed by these machines are very elementary: load an address in a register, apply a machine to another one via their addresses, and call the address of another machine. We endow addressing machines with an operational semantics based on leftmost reduction and study their behaviour. The set of addresses of these machines can be easily turned into a combinatory algebra. In order to obtain a model of the full untyped λ-calculus, we need to introduce a rule that bears similarities with the ω-rule and the rule ζ_β from combinatory logic.
Giuseppe Della Penna, Benedetto Intrigila, Giulio Manzonetto — Thu, 28 Jul 2022 00:00:00 GMT

Anatomical and Functional Connectivity at the Dendrodendritic Reciprocal Mitral Cell–Granule Cell Synapse: Impact on Recurrent and Lateral Inhibition
https://scholar.archive.org/work/jxvkuqix3vg27nzjrhxtlxfot4
In the vertebrate olfactory bulb, reciprocal dendrodendritic interactions between its principal neurons, the mitral and tufted cells, and inhibitory interneurons in the external plexiform layer mediate both recurrent and lateral inhibition, with the most numerous of these interneurons being granule cells. Here, we used recently established anatomical parameters and functional data on unitary synaptic transmission to simulate the strength of recurrent inhibition of mitral cells specifically from the reciprocal spines of rat olfactory bulb granule cells in a quantitative manner. Our functional data allowed us to derive a unitary synaptic conductance on the order of 0.2 nS. The simulations predicted that somatic voltage deflections by even proximal individual granule cell inputs are below the detection threshold and that attenuation with distance is roughly linear, with a passive length constant of 650 μm. However, since recurrent inhibition in the wake of a mitral cell action potential will originate from hundreds of reciprocal spines, the summated recurrent IPSP will be much larger, even though there will be substantial mutual shunting across the many inputs. Next, we updated and refined a preexisting model of connectivity within the entire rat olfactory bulb, first between pairs of mitral and granule cells, to estimate the likelihood and impact of recurrent inhibition depending on the distance between cells. Moreover, to characterize the substrate of lateral inhibition, we estimated the connectivity via granule cells between any two mitral cells or all the mitral cells that belong to a functional glomerular ensemble (i.e., which receive their input from the same glomerulus), again as a function of the distance between mitral cells and/or entire glomerular mitral cell ensembles. 
Our results predict the extent of the three regimes of anatomical connectivity between glomerular ensembles: high connectivity within a glomerular ensemble and across the first four rings of adjacent glomeruli, substantial connectivity to up to eleven glomeruli away, and negligible connectivity beyond. Finally, in a first attempt to estimate the functional strength of granule-cell mediated lateral inhibition, we combined this anatomical estimate with our above simulation results on attenuation with distance, resulting in slightly narrowed regimes of a functional impact compared to the anatomical connectivity.
S. Sara Aghvami, Yoshiyuki Kubota, Veronica Egger — Fri, 22 Jul 2022 00:00:00 GMT

Regularity and Neumann problems for operators with real coefficients satisfying Carleson condition
https://scholar.archive.org/work/xtj5arsrnvguxidaukwvosnmae
In this paper, we continue the study of a class of second order elliptic operators of the form ℒ = div(A∇·) in a domain above a Lipschitz graph in ℝ^n, where the coefficients of the matrix A satisfy a Carleson measure condition, expressed as a condition on the oscillation on Whitney balls. For this class of operators, it is known (since 2001) that the L^q Dirichlet problem is solvable for some 1 < q < ∞. Moreover, further studies completely resolved the range of L^q solvability of the Dirichlet, Regularity, and Neumann problems in Lipschitz domains, when the Carleson measure norm of the oscillation is sufficiently small. We show that there exists p_reg > 1, determined by q_*, such that the L^p Regularity problem is solvable for all 1 < p < p_reg, where q_* > 1 is the number such that the L^q Dirichlet problem for the adjoint operator ℒ^* is solvable for all q > q_*. Additionally, when n = 2, there exists p_neum > 1, determined by q^*, such that the L^p Neumann problem is solvable for all 1 < p < p_neum, where q^* > 1 is the number such that the L^q Dirichlet problem for the operator ℒ_1 = div(A_1∇·) with matrix A_1 = A/det A is solvable for all q > q^*.
Martin Dindoš, Steve Hofmann, Jill Pipher — Thu, 21 Jul 2022 00:00:00 GMT

A Totally Predictable Outcome: An Investigation of Traversals of Infinite Structures
https://scholar.archive.org/work/r3x3v6xxnzcxdf4udfraduoimu
Functors with an instance of the Traversable type class can be thought of as data structures which permit a traversal of their elements. This has been made precise by the correspondence between traversable functors and finitary containers (also known as polynomial functors) -- established in the context of total, necessarily terminating, functions. However, the Haskell language is non-strict and permits functions that do not terminate. It has long been observed that traversals can at times in fact operate over infinite lists, for example in distributing the Reader applicative. The result of such a traversal remains an infinite structure; however, it nonetheless is productive -- i.e. successive amounts of finite computation yield either termination or successive results. To investigate this phenomenon, we draw on tools from guarded recursion, making use of equational reasoning directly in Haskell.
Gershom Bazerman — Wed, 20 Jul 2022 00:00:00 GMT

Adding Negation to Lambda Mu
https://scholar.archive.org/work/y4d7erqaizanff7brbi2y45dpu
We present L, an extension of Parigot's λμ-calculus obtained by adding negation as a type constructor, together with syntactic constructs that represent negation introduction and elimination. We define a notion of reduction that extends λμ's reduction system with two new reduction rules, and show that the system satisfies subject reduction. Using Aczel's generalisation of Tait and Martin-Löf's notion of parallel reduction, we show that this extended reduction is confluent. Although the notion of type assignment has its limitations with respect to the representation of proofs in natural deduction with implication and negation, we show that all propositions that can be proved there have a witness in L. Using Girard's approach of reducibility candidates, we show that all typeable terms are strongly normalisable, and conclude the paper by showing that type assignment for L enjoys the principal typing property.
Steffen van Bakel — Wed, 20 Jul 2022 00:00:00 GMT

Central Submonads and Notions of Computation
https://scholar.archive.org/work/g5njwrxa6ba4lappyanhdfrtbi
The notion of "centre" has been introduced for many algebraic structures in mathematics. A notable example is the centre of a monoid, which always determines a commutative submonoid. Monads in category theory are important algebraic structures that may be used to model computational effects in programming languages, and in this paper we show how the notion of centre may be extended to strong monads acting on symmetric monoidal categories. We show that the centre of a strong monad 𝒯, if it exists, determines a commutative submonad 𝒵 of 𝒯, such that the Kleisli category of 𝒵 is isomorphic to the premonoidal centre (in the sense of Power and Robinson) of the Kleisli category of 𝒯. We provide three equivalent conditions which characterise the existence of the centre of 𝒯 and we show that every strong monad on many well-known naturally occurring categories does admit a centre, thereby showing that this new notion is ubiquitous. We also provide a computational interpretation of our ideas which consists in giving a refinement of Moggi's monadic metalanguage. The added benefit is that this allows us to immediately establish a large class of contextually equivalent programs for computational effects that are described via monads that admit a non-trivial centre, by simply considering the richer syntactic structure provided by the refinement.
Titouan Carette, Louis Lemonnier, Vladimir Zamdzhiev — Tue, 19 Jul 2022 00:00:00 GMT

How to Safely Use Extensionality in Liquid Haskell
https://scholar.archive.org/work/qmhznf7vtna37fwnqpzhcd3nou
Refinement type checkers are a powerful way to reason about functional programs. For example, one can prove properties of a slow, specification implementation, porting the proofs to an optimized implementation that behaves the same. Without functional extensionality, proofs must relate functions that are fully applied. When data itself has a higher-order representation, fully applied proofs face serious impediments! When working with first-order data, fully applied proofs lead to noisome duplication when using higher-order functions. While dependent type theories are typically consistent with functional extensionality axioms, refinement type systems with semantic subtyping treat naive phrasings of functional extensionality inconsistently, leading to unsoundness. We demonstrate this unsoundness and develop a new approach to equality in Liquid Haskell: we define a propositional equality in a library we call PEq. Using PEq avoids the unsoundness while still proving useful equalities at higher types; we demonstrate its use in several case studies. We validate PEq by building a small model and developing its metatheory. Additionally, we prove metaproperties of PEq inside Liquid Haskell itself using an unnamed folklore technique, which we dub 'classy induction'.
Niki Vazou, Michael Greenberg — Tue, 19 Jul 2022 00:00:00 GMT

Multi Types and Reasonable Space (Long Version)
https://scholar.archive.org/work/bucbcecyc5gabdwcxpvtv7ozgm
Accattoli, Dal Lago, and Vanoni have recently proved that the space used by the Space KAM, a variant of the Krivine abstract machine, is a reasonable space cost model for the lambda-calculus accounting for logarithmic space, solving a longstanding open problem. In this paper, we provide a new system of multi types (a variant of intersection types) and extract from multi type derivations the space used by the Space KAM, capturing into a type system the space complexity of the abstract machine. Additionally, we show how to capture also the time of the Space KAM, which is a reasonable time cost model, via minor changes to the type system.
Beniamino Accattoli, Ugo Dal Lago, Gabriele Vanoni — Mon, 18 Jul 2022 00:00:00 GMT
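As background for the entry above: the Space KAM is a variant of the Krivine abstract machine, whose basic transition rules are standard. The sketch below is a minimal ordinary Krivine machine for weak-head, call-by-name evaluation of untyped λ-terms in de Bruijn notation; it is not the Space KAM itself (which adds the bookkeeping needed for space reasonability), and the term encoding and function name are illustrative choices, not taken from the paper.

```python
# A minimal Krivine abstract machine (KAM) for weak-head call-by-name
# evaluation of untyped lambda terms in de Bruijn notation.
# Terms are tuples: ("var", n) | ("lam", body) | ("app", fun, arg)

def krivine(term, env=(), stack=()):
    """Run the KAM until a weak head normal form is reached.

    env is a tuple of closures (term, env); stack holds argument closures.
    Returns the final closure (term, env) whose term is the head lambda.
    """
    while True:
        tag = term[0]
        if tag == "app":                    # push the argument closure
            _, fun, arg = term
            stack = ((arg, env),) + stack
            term = fun
        elif tag == "lam" and stack:        # pop one argument into the env
            closure, stack = stack[0], stack[1:]
            env = (closure,) + env
            term = term[1]
        elif tag == "var":                  # jump to the closure bound to n
            term, env = env[term[1]]
        else:                               # lambda with empty stack: whnf
            return term, env

# (λx. x) (λy. y)  reduces to  λy. y
identity = ("lam", ("var", 0))
result, _ = krivine(("app", identity, identity))
```

Each transition touches only the head of the term, which is why machines in this family are a natural vehicle for fine-grained cost models: one can count transitions for time, and the sizes of the environment and stack for space.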