Learning Knowledge Base Inference with Neural Theorem Provers

Tim Rocktäschel, Sebastian Riedel
Proceedings of the 5th Workshop on Automated Knowledge Base Construction (AKBC), 2016

In this paper we present a proof-of-concept implementation of Neural Theorem Provers (NTPs), end-to-end differentiable counterparts of discrete theorem provers that perform first-order inference on vector representations of symbols using function-free, possibly parameterized, rules. As such, NTPs follow a long tradition of neural-symbolic approaches to automated knowledge base inference, but differ in that they are differentiable with respect to representations of symbols in a knowledge base and can thus learn representations of predicates and constants, as well as rules of predefined structure. Furthermore, they still allow us to incorporate domain knowledge provided as rules. The NTP presented here is realized via a differentiable version of the backward chaining algorithm. It operates on substitution representations and is able to learn complex logical dependencies from training facts of small knowledge bases.
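
As a rough, self-contained sketch (not the authors' implementation), the snippet below illustrates the two ingredients the abstract describes: soft unification that compares vector representations of symbols instead of checking them for equality, and backward chaining that threads substitutions through a rule body. The toy family-relations knowledge base, the RBF-style similarity exp(-||u - v||), and the min/max score aggregation are illustrative assumptions; in an actual NTP the proof score is differentiable in the symbol embeddings, which are trained end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 5

# Vector representations of predicates and constants. Random stand-ins here;
# in an NTP these are parameters learned by backpropagating the proof score.
emb = {s: rng.normal(size=DIM)
       for s in ["grandfatherOf", "fatherOf", "parentOf",
                 "abe", "homer", "bart"]}

FACTS = [("fatherOf", "abe", "homer"), ("parentOf", "homer", "bart")]

# One function-free rule of predefined structure (hypothetical example):
#   grandfatherOf(X, Y) :- fatherOf(X, Z), parentOf(Z, Y).
HEAD = ("grandfatherOf", "X", "Y")
BODY = [("fatherOf", "X", "Z"), ("parentOf", "Z", "Y")]

def sim(a, b):
    """Soft unification of two symbols via an RBF-style kernel in (0, 1]."""
    return float(np.exp(-np.linalg.norm(emb[a] - emb[b])))

def unify(atom, ground, subst, score):
    """Match an atom against a ground atom: known symbols are compared in
    embedding space (min-aggregated into the score), variables get bound."""
    subst = dict(subst)
    for q, g in zip(atom, ground):
        q = subst.get(q, q)          # resolve already-bound variables
        if q in emb:                 # symbol vs. symbol: soft comparison
            score = min(score, sim(q, g))
        else:                        # unbound variable: extend substitution
            subst[q] = g
    return subst, score

def prove(goals, subst, score):
    """Backward chaining over the fact list: min for conjunction within a
    proof, max over alternative proofs of the same goal."""
    if not goals:
        return score
    best = 0.0
    for fact in FACTS:               # every fact is a (soft) candidate match
        s2, sc2 = unify(goals[0], fact, subst, score)
        best = max(best, prove(goals[1:], s2, sc2))
    return best

# Score a proof of grandfatherOf(abe, bart) through the rule body.
subst, score = unify(HEAD, ("grandfatherOf", "abe", "bart"), {}, 1.0)
print("proof score:", prove(BODY, subst, score))   # 1.0 along the exact path
```

A consequence of this design is that a query can receive a non-zero score even when no exact symbolic proof exists: unification degrades gracefully with embedding distance rather than failing outright, which is what makes the proof score a trainable signal.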
doi:10.18653/v1/w16-1309 · dblp:conf/akbc/RocktaschelR16