29 Hits in 1.6 sec

Tree Neural Networks in HOL4 [article]

Thibault Gauthier
2020 arXiv   pre-print
We present an implementation of tree neural networks within the proof assistant HOL4. Their architecture makes them naturally suited for approximating functions whose domain is a set of formulas.  ...  Conclusion In this paper, we presented an implementation of tree neural networks (TNNs) in HOL4 that can be used to learn a function on HOL4 formulas from examples.  ...  In the case of formulas, tree neural networks (TNNs) [8] capture the compositional nature of the underlying functions, as their structure dynamically imitates the tree structure of the formula considered  ... 
arXiv:2009.01827v1 fatcat:vov4ztwwe5a5foplznmbwdpl3y

Deep Reinforcement Learning for Synthesizing Functions in Higher-Order Logic [article]

Thibault Gauthier
2020 arXiv   pre-print
A close interaction between the machine learning modules and the HOL4 library is achieved by the choice of tree neural networks (TNNs) as machine learning models and the internal use of HOL4 terms to represent  ...  In this case, a Monte Carlo Tree Search (MCTS) algorithm guided by a TNN can be used to explore the search space and produce better examples for training the next TNN.  ...  Definition (Tree neural network) We define a tree neural network to be a set of feed-forward neural networks with n layers and a tanh activation function for each layer.  ... 
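The TNN definition quoted in this snippet (a set of feed-forward networks, tanh activation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the operator names ("0", "s", "+"), the single-layer networks, and the embedding dimension are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a tree neural network (TNN): one small feed-forward
# network (here a single tanh layer) per operator; a term's embedding is
# computed bottom-up over its tree structure.
rng = np.random.default_rng(0)
DIM = 4

def make_ffnn(arity):
    # One tanh layer mapping the concatenated child embeddings to DIM.
    # Nullary operators get a fixed constant input vector.
    W = rng.standard_normal((DIM, max(arity, 1) * DIM)) * 0.1
    b = np.zeros(DIM)
    return lambda xs: np.tanh(W @ (np.concatenate(xs) if xs else np.ones(DIM)) + b)

# One network per operator, keyed by (name, arity); names are illustrative.
nets = {("0", 0): make_ffnn(0), ("s", 1): make_ffnn(1), ("+", 2): make_ffnn(2)}

def embed(term):
    # term = (operator_name, [subterms]); recurse on subterms, then apply
    # the operator's network to the child embeddings.
    name, args = term
    return nets[(name, len(args))]([embed(a) for a in args])

# Embedding of the term  s(0) + 0
v = embed(("+", [("s", [("0", [])]), ("0", [])]))
print(v.shape)  # (4,)
```

The key property matching the snippet is that the network's shape is not fixed in advance: the computation graph is rebuilt per term, mirroring its tree.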
arXiv:1910.11797v3 fatcat:bqtdfbrcxrcc5p6e3tzygogxsy

Learned Provability Likelihood for Tactical Search

Thibault Gauthier
2021 Electronic Proceedings in Theoretical Computer Science  
Experiments over the HOL4 library show an increase in the number of theorems re-proven by TacticToe thanks to this additional guidance.  ...  We adapt the tactical theorem prover TacticToe to factor in these estimations.  ...  Therefore, in this project, a tree neural network (TNN) is taught a function estimating the provability of goals in HOL4 [14].  ... 
doi:10.4204/eptcs.342.7 fatcat:xyp7yj4gond3vbw233kaordqoe

Learning to Prove with Tactics [article]

Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, Michael Norrish
2018 arXiv   pre-print
This knowledge is then used in a Monte Carlo tree search algorithm to explore promising tactic-level proof paths.  ...  We implement an automated tactical prover, TacticToe, on top of the HOL4 interactive theorem prover. TacticToe learns from human proofs which mathematical technique is suitable in each proof situation.  ...  Remark 1 Neural networks trained through reinforcement learning can be very effective for approximating the policy and evaluation, as demonstrated in, e.g., AlphaGo Zero [29].  ... 
arXiv:1804.00596v1 fatcat:twmo2yoiwrfaxmj2onudf6sw7a

Bio-Inspired Genetic Algorithms with Formalized Crossover Operators for Robotic Applications

Jie Zhang, Man Kang, Xiaojuan Li, Geng-yang Liu
2017 Frontiers in Neurorobotics  
“Path planning for unmanned aerial vehicle based on genetic algorithm & artificial neural network in 3D,” in Proceeding of the Conference on Data Mining and Intelligent Computing (ICDMIC '14), (New Delhi  ...  In Figure 4, the implementation procedure of the crossover operation can be viewed as a binary tree where the number of crossover points corresponds to the height of the binary tree.  ... 
doi:10.3389/fnbot.2017.00056 pmid:29114217 pmcid:PMC5660715 fatcat:fdsr2wflrvalvguw2gyiwl3dpm

HOList: An Environment for Machine Learning of Higher-Order Theorem Proving [article]

Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox
2019 arXiv   pre-print
Neural Architectures For the generation and ranking of actions in the action generator, we use a deep, two-tower neural network depicted in Figure 1 .  ...  The first use of deep neural networks for large scale theorem proving was proposed in [19] .  ... 
arXiv:1904.03241v3 fatcat:ih4fizuonrbvzk2oyu4pekhftu

Graph Representations for Higher-Order Logic and Theorem Proving [article]

Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, Christian Szegedy
2019 arXiv   pre-print
This paper presents the first use of graph neural networks (GNNs) for higher-order proof search and demonstrates that GNNs can improve upon state-of-the-art results in this domain.  ...  In this paper, we consider several graphical representations of higher-order logic and evaluate them against the HOList benchmark for higher-order theorem proving.  ...  Graph Neural Networks Graph neural networks (GNNs) compute embeddings for nodes in a graph via consecutive rounds (also called hops) of end-to-end differentiable message passing.  ... 
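The message-passing rounds ("hops") described in this snippet can be sketched generically. This is a sketch under stated assumptions, not the paper's architecture: mean aggregation over neighbors and a single tanh update layer are illustrative choices.

```python
import numpy as np

# Sketch of GNN message passing: each round ("hop") updates every node
# embedding from its own embedding and its neighbors' embeddings.
rng = np.random.default_rng(0)
DIM = 8

def message_passing(adj, h, rounds, W_self, W_nbr):
    # adj: {node: [neighbor, ...]}, h: {node: embedding vector}
    for _ in range(rounds):
        h = {
            n: np.tanh(
                W_self @ h[n]
                + W_nbr @ np.mean([h[m] for m in adj[n]], axis=0)
            )
            for n in adj
        }
    return h

# Tiny 3-node path graph: 0 - 1 - 2
adj = {0: [1], 1: [0, 2], 2: [1]}
h0 = {n: rng.standard_normal(DIM) for n in adj}
W_self = rng.standard_normal((DIM, DIM)) * 0.1
W_nbr = rng.standard_normal((DIM, DIM)) * 0.1
h = message_passing(adj, h0, rounds=3, W_self=W_self, W_nbr=W_nbr)
print(len(h), h[0].shape)
```

After k rounds, each node's embedding depends on its k-hop neighborhood, which is what makes such embeddings useful for graph representations of logical terms.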
arXiv:1905.10006v2 fatcat:kokjdqbvpvgvbfl4blxmbrafpe

Graph Representations for Higher-Order Logic and Theorem Proving

Aditya Paliwal, Sarah Loos, Markus Rabe, Kshitij Bansal, Christian Szegedy
2020 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
This paper presents the first use of graph neural networks (GNNs) for higher-order proof search and demonstrates that GNNs can improve upon state-of-the-art results in this domain.  ...  In this paper, we consider several graphical representations of higher-order logic and evaluate them against the HOList benchmark for higher-order theorem proving.  ...  Graph Neural Networks Graph neural networks (GNNs) compute embeddings for nodes in a graph via consecutive rounds (also called hops) of end-to-end differentiable message passing.  ... 
doi:10.1609/aaai.v34i03.5689 fatcat:7dwvu7ml4zgmdfzcjofyltc2ga

Learning Equational Theorem Proving [article]

Jelle Piepenbrock, Tom Heskes, Mikoláš Janota, Josef Urban
2021 arXiv   pre-print
To develop the methods, we first use two simpler arithmetic rewriting tasks that share tree-structured proof states and sparse rewards with the AIM problems.  ...  In the cooperative mode, the final system is combined with the Prover9 system, proving in 2 seconds what standalone Prover9 proves in 60 seconds.  ...  tree neural network embedding.  ... 
arXiv:2102.05547v1 fatcat:gjt2mmtbxvezlehy4za5ll3rs4

Mathematical Reasoning via Self-supervised Skip-tree Training [article]

Markus N. Rabe and Dennis Lee and Kshitij Bansal and Christian Szegedy
2020 arXiv   pre-print
To train language models for formal mathematics, we propose a novel skip-tree task.  ...  We also analyze the models' ability to formulate new conjectures by measuring how often the predictions are provable and useful in other proofs.  ...  Can neural networks learn symbolic rewriting? 33(3-4):319-339, 2004.  ... 
arXiv:2006.04757v3 fatcat:yrmqpmijjzh6rcnkm3b77k3mwy

JEFL: Joint Embedding of Formal Proof Libraries [article]

Qingxiang Wang, Cezary Kaliszyk
2021 arXiv   pre-print
Our approach is based on the fasttext implementation of Word2Vec, on top of which a tree traversal module is added to adapt its algorithm to the representation format of our data export pipeline.  ...  We compare the explainability, customizability, and online-servability of the approaches and argue that the neural embedding approach has more potential to be integrated into an interactive proof assistant  ...  There are in total 18723 and 16874 lines of tt items in HOL4 and HOL Light, respectively.  ... 
arXiv:2107.10188v1 fatcat:wknmuawpkvdqvopkf5zb3akc4y

Learning to Prove Theorems by Learning to Generate Theorems [article]

Mingzhe Wang, Jia Deng
2020 arXiv   pre-print
To address this limitation, we propose to learn a neural generator that automatically synthesizes theorems and proofs for the purpose of training a theorem prover.  ...  Experiments on real-world tasks demonstrate that synthetic data from our approach improves the theorem prover and advances the state of the art of automated theorem proving in Metamath.  ...  Relevance network of generator The relevance network in step 2 is a deep network trained to pick a proof tree from a set of candidates by scoring and ranking them.  ... 
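The scoring-and-ranking step the last snippet describes (pick one proof tree from a set of candidates) can be sketched generically. The 3-dimensional feature vectors and the linear scorer below are illustrative assumptions, not the paper's relevance network.

```python
import numpy as np

# Generic sketch of selecting the best candidate by scoring and ranking.
# A learned model would replace the linear scorer; features are placeholders.
rng = np.random.default_rng(1)

def rank_candidates(candidates, features, w):
    # Score each candidate with a linear model, return them best-first.
    scores = {c: float(w @ features[c]) for c in candidates}
    return sorted(candidates, key=lambda c: scores[c], reverse=True)

candidates = ["tree_a", "tree_b", "tree_c"]
features = {c: rng.standard_normal(3) for c in candidates}
w = np.array([1.0, -0.5, 0.2])
print(rank_candidates(candidates, features, w))
```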
arXiv:2002.07019v2 fatcat:64o5kj6et5c3lpgmjfdbvyk5gu

Learning to Reason in Large Theories without Imitation [article]

Kshitij Bansal, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, Viktor Toman
2020 arXiv   pre-print
In this paper, we demonstrate how to do automated theorem proving in the presence of a large knowledge base of potential premises without learning from human proofs.  ...  We suggest an exploration mechanism that mixes in additional premises selected by a tf-idf (term frequency-inverse document frequency) based lookup in a deep reinforcement learning scenario.  ...  Neural networks were first applied to premise selection for automated theorem proving in Alemi et al. [2016] .  ... 
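The tf-idf based premise lookup mentioned in this snippet can be sketched as follows: score each premise statement against a goal by the summed tf-idf weight of shared tokens. The toy premise names, statements, and whitespace tokenization are illustrative assumptions.

```python
import math
from collections import Counter

# Sketch of tf-idf premise lookup: rare tokens (high idf) shared between
# the goal and a premise contribute most to that premise's score.
premises = {
    "ADD_COMM": "a + b = b + a",
    "MUL_COMM": "a * b = b * a",
    "ADD_ASSOC": "a + ( b + c ) = ( a + b ) + c",
}

docs = {name: stmt.split() for name, stmt in premises.items()}
N = len(docs)
# Document frequency: in how many premises each token occurs.
df = Counter(tok for toks in docs.values() for tok in set(toks))
idf = {tok: math.log(N / df[tok]) for tok in df}

def score(goal_tokens, doc_tokens):
    tf = Counter(doc_tokens)
    return sum(tf[t] * idf.get(t, 0.0) for t in set(goal_tokens))

def lookup(goal):
    toks = goal.split()
    return max(docs, key=lambda name: score(toks, docs[name]))

print(lookup("x * y = y * x"))  # → MUL_COMM
```

Here "=" occurs in every premise, so its idf is zero and it carries no weight, while "*" is rare and dominates the score; this is the property the exploration mechanism relies on when mixing in tf-idf-selected premises.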
arXiv:1905.10501v3 fatcat:ft65xynsgbfdxkzlcy4omg7ov4

Generating Correctness Proofs with Neural Networks [article]

Alex Sanchez-Stern and Yousef Alhessi and Lawrence Saul and Sorin Lerner
2020 arXiv   pre-print
In this paper we present Proverbot9001, a proof search system using machine learning techniques to produce proofs of software correctness in interactive theorem provers.  ...  Foundational verification allows programmers to build software which has been empirically shown to have high levels of assurance in a variety of important domains.  ...  A recurrent neural network.  ... 
arXiv:1907.07794v4 fatcat:u4pp33vevfafjk4jzd34rnclxi

Deep Reinforcement Learning for Synthesizing Functions in Higher-Order Logic

Thibault Gauthier
unpublished
A close interaction between the machine learning modules and the HOL4 library is achieved by the choice of tree neural networks (TNNs) as machine learning models and the internal use of HOL4 terms to represent  ...  In this case, a Monte Carlo Tree Search (MCTS) algorithm guided by a TNN can be used to explore the search space and produce better examples for training the next TNN.  ...  Definition (Tree neural network) We define a tree neural network to be a set of feed-forward neural networks with n layers and a tanh activation function for each layer.  ... 
doi:10.29007/7jmg fatcat:skbzeodwg5gefo4trnr2hiigja
Showing results 1 — 15 out of 29 results