Maximizing SLU Performance with Minimal Training Data Using Hybrid RNN Plus Rule-based Approach
2018
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
Spoken language understanding (SLU) using recurrent neural networks (RNNs) achieves good performance with large training datasets, but collecting large training datasets is challenging, especially for new voice applications. The purpose of this study is therefore to maximize SLU performance, especially with small training datasets. To this end, we propose a novel CRF-based dialog act selector that chooses suitable dialog acts from the outputs of an RNN SLU and a rule-based SLU. We evaluate the
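The abstract describes choosing, per slot, between dialog-act hypotheses produced by an RNN SLU and a rule-based SLU. A minimal sketch of that selection step is shown below; note that the paper trains a CRF to make this choice, whereas this hypothetical stand-in simply compares confidence scores, and all names (`select_dialog_acts`, the slot/act values) are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch, NOT the paper's CRF selector: pick, slot by slot,
# the dialog-act hypothesis with the higher confidence score.

def select_dialog_acts(rnn_hyp, rule_hyp):
    """Combine two SLU hypotheses.

    Each hypothesis maps a slot name to a (dialog_act, confidence) pair.
    The paper uses a trained CRF for this decision; a per-slot
    confidence comparison stands in for it here.
    """
    selected = {}
    for slot in set(rnn_hyp) | set(rule_hyp):
        rnn_act = rnn_hyp.get(slot, (None, 0.0))
        rule_act = rule_hyp.get(slot, (None, 0.0))
        # Keep whichever source is more confident about this slot.
        selected[slot] = max(rnn_act, rule_act, key=lambda a: a[1])[0]
    return selected

rnn = {"food": ("inform", 0.9), "area": ("request", 0.4)}
rule = {"area": ("inform", 0.7)}
print(select_dialog_acts(rnn, rule))  # "food" from the RNN, "area" from the rules
```

In the paper itself the selector is a CRF trained on features of both SLU outputs, which should generalize better than a fixed confidence threshold, particularly when the two systems' scores are not calibrated against each other.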
doi:10.18653/v1/w18-5043
dblp:conf/sigdial/HommaADT18
fatcat:fwxcag23onh37bvj4tpexfreii