A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2015; you can also visit the original URL.
The file type is application/pdf.
Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding
2015
IEEE/ACM Transactions on Audio Speech and Language Processing
Semantic slot filling is one of the most challenging problems in spoken language understanding (SLU). In this paper, we propose to use recurrent neural networks (RNNs) for this task, and present several novel architectures designed to efficiently model past and future temporal dependencies. Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants. To facilitate reproducibility, we implemented these networks with the publicly […]
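The Elman and Jordan architectures named in the abstract differ only in what is fed back at each time step: an Elman network recycles its previous hidden state, while a Jordan network recycles its previous output. A minimal NumPy sketch of the two recurrences for a per-token slot tagger is shown below; all dimensions, weight names, and functions here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch (not the paper's code): minimal Elman vs. Jordan forward
# passes for a sequence tagger. Dimensions and names are assumptions.
rng = np.random.default_rng(0)
D_in, D_hid, D_out = 4, 8, 3  # word-embedding, hidden, slot-label dims

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def elman_forward(xs, Wx, Wh, U):
    """Elman RNN: the previous hidden state feeds back into the hidden layer."""
    h = np.zeros(D_hid)
    ys = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h)   # recurrence over the hidden state
        ys.append(softmax(U @ h))      # per-token slot-label distribution
    return ys

def jordan_forward(xs, Wx, Wy, U):
    """Jordan RNN: the previous output feeds back instead of the hidden state."""
    y = np.zeros(D_out)
    ys = []
    for x in xs:
        h = np.tanh(Wx @ x + Wy @ y)   # recurrence over the previous output
        y = softmax(U @ h)
        ys.append(y)
    return ys

xs = [rng.standard_normal(D_in) for _ in range(5)]  # a 5-token "utterance"
Wx = rng.standard_normal((D_hid, D_in))
Wh = rng.standard_normal((D_hid, D_hid))
Wy = rng.standard_normal((D_hid, D_out))
U = rng.standard_normal((D_out, D_hid))

ys_elman = elman_forward(xs, Wx, Wh, U)
ys_jordan = jordan_forward(xs, Wx, Wy, U)
print(len(ys_elman), len(ys_jordan))  # one label distribution per token
```

Both variants emit one label distribution per input token, which is why they suit slot filling, where every word in the utterance receives a semantic tag.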
doi:10.1109/taslp.2014.2383614
fatcat:yn7sgsgn7nfevfbnn6byk4nm2q