BeliefBank: Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief
[article]
arXiv pre-print, 2021
Although pretrained language models (PTLMs) contain significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed, even after specialized training. As a result, it can be hard to identify what the model actually "believes" about the world, making it susceptible to inconsistent behavior and simple errors. Our goal is to reduce these problems. Our approach is to embed a PTLM in a broader system that also includes an evolving, symbolic memory of beliefs -- a BeliefBank -- that records but then may modify the raw PTLM answers.
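The abstract describes wrapping a model in a symbolic memory that can revise its raw answers. A minimal sketch of that idea follows; this is illustrative only, not the paper's implementation (the paper uses a weighted SAT solver for consistency repair, whereas this sketch applies naive implication constraints, and all names here are hypothetical):

```python
# Illustrative sketch: a tiny "belief bank" that stores a model's raw
# yes/no answers in a symbolic memory and repairs simple inconsistencies.

class BeliefBank:
    def __init__(self, constraints=None):
        # memory maps a statement string to a truth value (True/False)
        self.memory = {}
        # constraints: (premise, implied) pairs meaning
        # "if premise is believed True, implied must also be True"
        self.constraints = constraints or []

    def record(self, statement, answer):
        # store the model's raw answer for this statement
        self.memory[statement] = answer

    def consistent(self):
        # check every implication constraint against stored beliefs
        for premise, implied in self.constraints:
            if self.memory.get(premise) is True and self.memory.get(implied) is False:
                return False
        return True

    def repair(self):
        # naive repair (assumption, not the paper's method): force the
        # implied belief to True whenever its premise is believed True
        for premise, implied in self.constraints:
            if self.memory.get(premise) is True and self.memory.get(implied) is False:
                self.memory[implied] = True


bank = BeliefBank(constraints=[("X is a swallow", "X is a bird")])
bank.record("X is a swallow", True)
bank.record("X is a bird", False)   # an inconsistent raw model answer
print(bank.consistent())            # inconsistency detected
bank.repair()
print(bank.consistent())            # memory is now consistent
```

In the paper's fuller design, the memory both records answers and feeds known beliefs back as context when re-querying the model; the sketch above only shows the record-and-repair half.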
arXiv:2109.14723v1