Zero-Shot Information Extraction as a Unified Text-to-Triple Translation
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
We cast a suite of information extraction tasks into a text-to-triple translation framework. Instead of solving each task relying on task-specific datasets and models, we formalize the task as a translation between task-specific input text and output triples. By taking the task-specific input, we enable a task-agnostic translation by leveraging the latent knowledge that a pre-trained language model has about the task. We further demonstrate that a simple pre-training task of predicting which …
doi:10.18653/v1/2021.emnlp-main.94
fatcat:tkv47zq7qbertlzueslfxqrsai
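
The following is a minimal, illustrative Python sketch of the text-to-triple framing the abstract describes: task-specific input text goes in, and (subject, relation, object) triples come out of a single sequence-to-sequence "translation" step. The model name (t5-small), the prompt wording, and the "subject | relation | object" output format are assumptions made for illustration only; the paper's actual zero-shot decoding with a pre-trained language model differs, and no useful output should be expected from an off-the-shelf t5-small without it.

# Sketch only: not the authors' implementation. Model, prompt, and output
# format below are placeholders chosen to illustrate the text-to-triple framing.

from typing import List, NamedTuple

from transformers import pipeline


class Triple(NamedTuple):
    subject: str
    relation: str
    object: str


def text_to_triples(text: str, task_hint: str = "extract triples") -> List[Triple]:
    """Translate task-specific input text into output triples with one seq2seq call."""
    # t5-small is an arbitrary placeholder checkpoint, not the paper's model.
    translator = pipeline("text2text-generation", model="t5-small")
    prompt = f"{task_hint}: {text}"
    generated = translator(prompt, max_length=64)[0]["generated_text"]

    # Assume one "subject | relation | object" triple per semicolon-separated chunk.
    triples = []
    for chunk in generated.split(";"):
        parts = [p.strip() for p in chunk.split("|")]
        if len(parts) == 3:
            triples.append(Triple(*parts))
    return triples


if __name__ == "__main__":
    print(text_to_triples("Barack Obama was born in Hawaii."))

The point of the sketch is the interface, not the output quality: every task (open information extraction, relation classification, factual probing) shares the same text-in, triples-out signature, which is what makes the framing task-agnostic.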