A copy of this work was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2020. The original URL is also available. File type: application/pdf.
Learning Knowledge Base Inference with Neural Theorem Provers
2016
Proceedings of the 5th Workshop on Automated Knowledge Base Construction
In this paper we present a proof-of-concept implementation of Neural Theorem Provers (NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable with respect to the representations of symbols in a knowledge base.
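The key idea behind "inference on vector representations of symbols" is that symbol matching during proving is soft: instead of requiring two predicate symbols to be identical, a differentiable similarity between their embeddings is used, so gradients can flow back into the symbol representations. Below is a minimal sketch of this soft-unification idea; the RBF-style kernel, the two-dimensional embeddings, and all symbol names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_unify(a, b):
    """Soft-unification score in [0, 1] (illustrative RBF-style kernel):
    identical embeddings give 1.0; dissimilar ones decay toward 0.
    Because the score is differentiable in a and b, training can pull
    the embeddings of symbols that should unify closer together."""
    return float(np.exp(-np.linalg.norm(a - b)))

# Hypothetical 2-d embeddings for three relation symbols.
grandpa     = np.array([1.0, 0.0])
grandfather = np.array([0.9, 0.1])   # close to grandpa: near-synonym
parent      = np.array([-1.0, 1.0])  # far from grandpa: different relation

print(soft_unify(grandpa, grandpa))      # exact match scores 1.0
print(soft_unify(grandpa, grandfather))  # near-synonyms score high
print(soft_unify(grandpa, parent))       # unrelated symbols score low
```

In a discrete prover, a rule head `grandfather(X, Y)` could never prove a query about `grandpa`; with soft unification, the proof succeeds with a score reflecting embedding similarity.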
doi:10.18653/v1/w16-1309
dblp:conf/akbc/RocktaschelR16
fatcat:5d42tylmcve3rocsjw7d3jueq4