Stack-based Multi-layer Attention for Transition-based Dependency Parsing

Zhirui Zhang, Shujie Liu, Mu Li, Ming Zhou, Enhong Chen
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017)
Although sequence-to-sequence (seq2seq) networks have achieved significant success in many NLP tasks such as machine translation and text summarization, simply applying this approach to transition-based dependency parsing does not yield performance gains comparable to other state-of-the-art methods, such as stack-LSTM and head selection. In this paper, we propose a stack-based multi-layer attention model for seq2seq learning to better leverage structural linguistic information. In our method, two binary vectors are used to track the decoding stack in transition-based parsing, and multi-layer attention is introduced to capture multiple word dependencies in partial trees. We conduct experiments on the PTB and CTB datasets, and the results show that our proposed model achieves state-of-the-art accuracy and a significant improvement in labeled precision over the baseline seq2seq model.
doi:10.18653/v1/d17-1175 dblp:conf/emnlp/ZhangLLZC17
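
To make the abstract's mechanism concrete, below is a minimal sketch of how two binary vectors over the input positions could track the parser stack and bias attention toward stack elements. This is an illustration under our own assumptions (an arc-standard-style stack, additive score biases, and all function and variable names here are hypothetical), not the paper's actual multi-layer attention formulation.

```python
# Illustrative sketch only: two binary indicator vectors over the n input
# words mark stack membership, and attention scores are biased toward
# those positions. The biasing scheme and names are assumptions for
# exposition, not the authors' released implementation.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def stack_vectors(stack, n):
    """Build two binary vectors of length n: v_top marks the word on top
    of the stack, v_rest marks the remaining stack elements."""
    v_top = np.zeros(n)
    v_rest = np.zeros(n)
    if stack:
        v_top[stack[-1]] = 1.0
        for i in stack[:-1]:
            v_rest[i] = 1.0
    return v_top, v_rest

def stack_biased_attention(query, enc_states, v_top, v_rest,
                           w_top=1.0, w_rest=0.5):
    """Dot-product attention whose scores receive additive bonuses for
    positions currently on the stack (one plausible way to inject the
    binary vectors into a seq2seq decoder)."""
    scores = enc_states @ query + w_top * v_top + w_rest * v_rest
    weights = softmax(scores)
    return weights @ enc_states, weights

# Toy run: 5 encoded words; the stack holds positions [0, 2] after some
# shift/reduce transitions (hypothetical parser state).
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 8))   # encoder states, one 8-dim vector per word
q = rng.normal(size=8)          # current decoder query
v_top, v_rest = stack_vectors([0, 2], n=5)
context, attn = stack_biased_attention(q, enc, v_top, v_rest)
print("attention weights:", np.round(attn, 3))
```

In this toy version the binary vectors simply shift probability mass toward stack positions; the paper's contribution is richer, stacking multiple such attention layers so the decoder can attend to several words participating in dependencies of the partial tree.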