A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Recurrent Neural Networks with Mixed Hierarchical Structures and EM Algorithm for Natural Language Processing
[article] · 2022 · arXiv pre-print
How to obtain hierarchical representations with increasing levels of abstraction has become one of the key issues in learning with deep neural networks. A variety of RNN models have recently been proposed in the literature to incorporate both explicit and implicit hierarchical information into language modeling. In this paper, we propose a novel approach called the latent indicator layer to identify and learn implicit hierarchical information (e.g., phrases), and further develop an EM algorithm
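The abstract only sketches the method, so the following is not the paper's algorithm; it is a minimal, generic illustration of the EM pattern the abstract invokes: a latent binary indicator per observation, estimated in the E-step and used to re-fit parameters in the M-step. Here the indicator selects between two Gaussian components; all names and parameters are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only: the paper's latent indicator layer lives inside an
# RNN, whereas this toy example applies EM to a 1-D two-component Gaussian
# mixture, where each point's latent indicator picks its generating component.

def em_mixture(x, n_iter=50):
    mu = np.array([x.min(), x.max()], dtype=float)   # initial component means
    sigma = np.array([x.std(), x.std()]) + 1e-6      # initial std devs
    pi = np.array([0.5, 0.5])                        # mixing weights
    for _ in range(n_iter):
        # E-step: posterior probability (responsibility) of each indicator
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
               / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the expected indicators
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sigma, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 0.5, 200)])
mu, sigma, pi = em_mixture(x)
```

In the paper's setting the E-step would instead infer phrase-boundary indicators over a token sequence, but the alternating expectation/maximization structure is the same.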
arXiv:2201.08919v1
fatcat:jubyuozwrjcydalzpcz26upo54