Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing
2021
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Predicting linearized Abstract Meaning Representation (AMR) graphs using pre-trained sequence-to-sequence Transformer models has recently led to large improvements on AMR parsing benchmarks. These parsers are simple and avoid explicit modeling of structure, but they lack desirable properties such as graph well-formedness guarantees or built-in graph-sentence alignments. In this work we explore the integration of general pre-trained sequence-to-sequence language models and a structure-aware transition-based approach.
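To illustrate the "linearized AMR as sequence generation" setup the abstract refers to, the sketch below uses the Hugging Face transformers library with a generic BART checkpoint. This is not the authors' released code; the checkpoint name, the example sentence, and the sample linearization in the comment are illustrative assumptions, and a real parser would first be fine-tuned on sentence-to-linearized-graph pairs.

```python
# Minimal sketch of seq2seq AMR parsing as text generation (assumptions noted above).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "facebook/bart-large"  # placeholder base model, not an AMR-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

sentence = "The boy wants to go."
inputs = tokenizer(sentence, return_tensors="pt")

# Without AMR fine-tuning this produces ordinary text, not a valid graph.
# After fine-tuning, the decoded string would be a linearized AMR such as
# "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))".
output_ids = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the model emits an unconstrained token sequence, nothing in this setup guarantees that the output parses into a well-formed graph or aligns to the input words, which is the limitation the structure-aware transition-based approach targets.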
doi:10.18653/v1/2021.emnlp-main.507
fatcat:ewdruugcyzc2rcvl5xfoyxq7bu