File type: application/pdf
Stack-based Multi-layer Attention for Transition-based Dependency Parsing
2017
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Although sequence-to-sequence (seq2seq) networks have achieved significant success in many NLP tasks such as machine translation and text summarization, simply applying this approach to transition-based dependency parsing does not yield performance gains comparable to other state-of-the-art methods, such as stack-LSTM and head selection. In this paper, we propose a stack-based multi-layer attention model for seq2seq learning to better leverage structural linguistic information. In our method,
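As an illustration only (not reproduced from the paper), the sketch below shows one plausible reading of stack-based multi-layer attention in a transition-based parser: a seq2seq decoder state is refined by two stacked attention layers over embeddings of the items currently on the parser stack, and the refined state scores the candidate transitions. All names, dimensions, toy inputs, and the specific update rule are assumptions for illustration.

```python
# Hedged sketch: decoder state attends over parser-stack embeddings through
# two stacked attention layers, then scores transitions. Not the authors'
# implementation; shapes and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D = 8                               # hidden size (assumed)
ACTIONS = ["SHIFT", "LEFT-ARC", "RIGHT-ARC"]

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def attend(query, memory):
    """Dot-product attention of a query vector over memory rows."""
    scores = memory @ query          # (n,) similarity per stack item
    weights = softmax(scores)        # (n,) attention distribution
    return weights @ memory          # (D,) context vector

def score_transitions(decoder_state, stack_embs, W1, W2, W_out):
    """Refine the decoder state with two attention layers over the stack,
    then produce a distribution over candidate transitions."""
    h = decoder_state
    for W in (W1, W2):               # "multi-layer" = stacked attention hops
        ctx = attend(h, stack_embs)
        h = np.tanh(W @ np.concatenate([h, ctx]))
    return softmax(W_out @ h)

# Toy example: three partial subtrees on the stack, random parameters.
stack_embs = rng.normal(size=(3, D))
decoder_state = rng.normal(size=D)
W1 = rng.normal(size=(D, 2 * D)) * 0.1
W2 = rng.normal(size=(D, 2 * D)) * 0.1
W_out = rng.normal(size=(len(ACTIONS), D)) * 0.1

probs = score_transitions(decoder_state, stack_embs, W1, W2, W_out)
print(dict(zip(ACTIONS, probs.round(3))))
```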
doi:10.18653/v1/d17-1175
dblp:conf/emnlp/ZhangLLZC17
fatcat:p2yahhq36rbn5l7eumc3sbpwpm