The Importance of Being Recurrent for Modeling Hierarchical Structure
2018
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures, recurrent versus non-recurrent, with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose.
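To make the probing setup behind these language-modeling results concrete, below is a minimal sketch (not taken from the paper) of the subject-verb agreement test popularized by Linzen et al. (2016): a language model is asked whether it prefers the verb form that agrees with the hierarchically governing subject or with a nearby attractor noun. The toy vocabulary, model dimensions, and untrained random weights are illustrative assumptions; a real probe uses a trained LM and a large corpus of such sentences.

# Sketch of the Linzen et al. (2016) agreement probe, with assumed
# toy vocabulary and an untrained model (illustration only).
import torch
import torch.nn as nn

vocab = ["the", "keys", "to", "cabinet", "are", "is"]
idx = {w: i for i, w in enumerate(vocab)}

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(h)  # next-token logits at every position

model = LSTMLanguageModel(len(vocab))

# "The keys to the cabinet ..." -- the correct verb agrees with the
# hierarchically governing subject "keys", not the adjacent singular
# attractor "cabinet".
prefix = torch.tensor([[idx[w] for w in ["the", "keys", "to", "the", "cabinet"]]])
logits = model(prefix)[0, -1]  # next-token distribution after the prefix
print("score(are) - score(is) =", (logits[idx["are"]] - logits[idx["is"]]).item())
# A trained LM that tracks hierarchical structure should score the
# plural "are" above the singular "is" despite the attractor.

In the actual evaluations, this comparison is run with a trained language model over many thousands of such sentences, and accuracy is the fraction of cases where the grammatical verb form receives the higher score.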
doi:10.18653/v1/d18-1503
fatcat:7dnwt3ov75hp7hvdcl6kqwzhnu