Augmenting Neural Networks with First-order Logic

Tao Li, Vivek Srikumar
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
Today, the dominant paradigm for training neural networks involves minimizing task loss on a large dataset. Using world knowledge to inform a model while retaining the ability to perform end-to-end training remains an open question. In this paper, we present a novel framework for introducing declarative knowledge to neural network architectures in order to guide training and prediction. Our framework systematically compiles logical statements into computation graphs that augment a neural network without extra learnable parameters or manual redesign. We evaluate our modeling strategy on three tasks: machine comprehension, natural language inference, and text chunking. Our experiments show that knowledge-augmented networks can strongly improve over baselines, especially in low-data regimes.
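As a rough illustration of the general idea (not the authors' exact compilation), a first-order rule such as "if neuron a fires, neuron b should fire" can be relaxed into a differentiable term using a t-norm-based soft logic, so that rule violations produce gradients through the same computation graph. The sketch below is a minimal, hypothetical example using the Łukasiewicz relaxation of implication; the tensors `a` and `b` stand in for any named activations in [0, 1].

```python
import torch

def soft_implication(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Łukasiewicz relaxation of the implication a -> b for activations in [0, 1]."""
    return torch.clamp(1.0 - a + b, max=1.0)

# Hypothetical stand-ins for two named activations the rule refers to.
a = torch.sigmoid(torch.randn(4, requires_grad=True))
b = torch.sigmoid(torch.randn(4, requires_grad=True))

rule_truth = soft_implication(a, b)      # per-example degree to which the rule holds
logic_loss = (1.0 - rule_truth).mean()   # distance from full satisfaction of the rule
logic_loss.backward()                    # gradients flow back into the augmented network
```

Because the relaxed rule is just another node in the computation graph, it adds no learnable parameters; it only reshapes training (or inference-time scores) toward rule-consistent predictions.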
doi:10.18653/v1/p19-1028 dblp:conf/acl/LiS19