A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2018; you can also visit the original URL.
The file type is application/pdf.
LSTM Shift-Reduce CCG Parsing
2016
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
We describe a neural shift-reduce parsing model for CCG, factored into four unidirectional LSTMs and one bidirectional LSTM. This factorization allows the linearization of the complete parsing history, and results in a highly accurate greedy parser that outperforms all previous beam-search shift-reduce parsers for CCG. By further deriving a globally optimized model using a task-based loss, we improve over the state of the art by up to 2.67% labeled F1.
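The abstract describes the architecture only at a high level. As a rough illustration of the general scheme it builds on, the sketch below implements a greedy shift-reduce loop in which LSTM summaries of the parser state are concatenated to score the next action. Everything here is an assumption for illustration: the component split (a BiLSTM input encoder plus unidirectional LSTMs over stack, buffer, and action history), the toy action inventory, and all class and function names. The paper's actual four-plus-one LSTM factorization, the CCG combinators, and the globally optimized task-based training objective are not reproduced.

```python
import torch
import torch.nn as nn

# Hypothetical action inventory for illustration; a real CCG shift-reduce
# parser indexes its reduce actions by combinator and result category.
ACTIONS = ["SHIFT", "REDUCE_L", "REDUCE_R"]


class GreedyShiftReduceScorer(nn.Module):
    """Scores parser actions from LSTM summaries of the parser state.

    Assumed factorization (illustrative only): one bidirectional LSTM
    encodes the input sentence, and unidirectional LSTMs summarize the
    stack, the buffer, and the action history, so the parsing history
    is linearized into fixed-size vectors before scoring.
    """

    def __init__(self, vocab_size: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.act_embed = nn.Embedding(len(ACTIONS), dim)
        self.input_bilstm = nn.LSTM(dim, dim, bidirectional=True, batch_first=True)
        self.stack_lstm = nn.LSTM(2 * dim, dim, batch_first=True)
        self.buffer_lstm = nn.LSTM(2 * dim, dim, batch_first=True)
        self.history_lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(3 * dim, len(ACTIONS))

    def forward(self, tokens, stack_idx, buffer_idx, history):
        ctx, _ = self.input_bilstm(self.embed(tokens))  # (1, n, 2*dim)

        def last_state(lstm, seq):
            # Empty components (e.g. an empty stack) map to a zero vector.
            if seq.size(1) == 0:
                return seq.new_zeros(1, lstm.hidden_size)
            out, _ = lstm(seq)
            return out[:, -1]

        s = last_state(self.stack_lstm, ctx[:, stack_idx])
        b = last_state(self.buffer_lstm, ctx[:, buffer_idx])
        hist_seq = (self.act_embed(history).unsqueeze(0) if history.numel()
                    else ctx.new_zeros(1, 0, self.embed.embedding_dim))
        h = last_state(self.history_lstm, hist_seq)
        return self.out(torch.cat([s, b, h], dim=-1))  # one score per action


def greedy_parse(model, tokens):
    """Greedy decoding: repeatedly take the highest-scoring legal action."""
    stack, buffer, history = [], list(range(tokens.size(1))), []
    while buffer or len(stack) > 1:
        scores = model(tokens,
                       torch.tensor(stack, dtype=torch.long),
                       torch.tensor(buffer, dtype=torch.long),
                       torch.tensor(history, dtype=torch.long))
        # Mask illegal actions: SHIFT needs a nonempty buffer,
        # a binary REDUCE needs at least two items on the stack.
        mask = torch.full_like(scores, float("-inf"))
        if buffer:
            mask[0, 0] = 0.0
        if len(stack) >= 2:
            mask[0, 1] = mask[0, 2] = 0.0
        act = int((scores + mask).argmax())
        if act == 0:
            stack.append(buffer.pop(0))  # SHIFT
        else:
            stack.pop()  # binary REDUCE: two items become one (net -1)
        history.append(act)
    return [ACTIONS[a] for a in history]


model = GreedyShiftReduceScorer(vocab_size=100)
print(greedy_parse(model, torch.randint(0, 100, (1, 5))))
```

The legality mask guarantees the greedy loop always has at least one valid action and therefore terminates; the beam-search and globally optimized variants the abstract compares against would replace the single argmax with scored partial derivations.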
doi:10.18653/v1/d16-1181
dblp:conf/emnlp/Xu16
fatcat:raympzlaf5fdhltqx7jaeh7mwy