Maximizing SLU Performance with Minimal Training Data Using Hybrid RNN Plus Rule-based Approach

Takeshi Homma, Adriano S. Arantes, Maria Teresa Gonzalez Diaz, Masahito Togami
2018 Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue  
Therefore, the purpose of this study is to maximize SLU performance, especially for small training data sets.  ...  Spoken language understanding (SLU) by using recurrent neural networks (RNN) achieves good performance for large training data sets, but collecting large training datasets is a challenge, especially for  ...  ML achieves good SLU performance for large training data sets. However, ML-based SLU with small training data results in poor performance.  ...
doi:10.18653/v1/w18-5043 dblp:conf/sigdial/HommaADT18 fatcat:fwxcag23onh37bvj4tpexfreii

Using Recurrent Neural Networks for Slot Filling in Spoken Language Understanding

Gregoire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xiaodong He, Larry Heck, Gokhan Tur, Dong Yu, Geoffrey Zweig
2015 IEEE/ACM Transactions on Audio Speech and Language Processing  
In addition, we compared the approaches on two custom SLU data sets from the entertainment and movies domains.  ...  Specifically, we implemented and compared several important RNN architectures, including Elman, Jordan, and hybrid variants.  ...  CONCLUSIONS We have proposed the use of recurrent neural networks for the SLU slot filling task, and performed a careful comparison of the standard RNN architectures, as well as hybrid, bi-directional,  ... 
doi:10.1109/taslp.2014.2383614 fatcat:yn7sgsgn7nfevfbnn6byk4nm2q

The EVALITA Dependency Parsing Task: From 2007 to 2011 [chapter]

Cristina Bosco, Alessandro Mazzei
2013 Lecture Notes in Computer Science  
EVALITA's shared tasks are aimed at contributing to the development and dissemination of natural language resources and technologies by proposing a shared context for training and evaluation.  ...  The co-location with CLiC-it potentially widens the potential audience of EVALITA.  ...  Acknowledgments Luca Atzori and Daniele Sartiano helped performing the experiments using embeddings and clusters.  ... 
doi:10.1007/978-3-642-35828-9_1 fatcat:p6dyjaxm4zbitfajtciwclwipu

Learning Task-Oriented Dialog with Neural Network Methods

Bing Liu
2018
In learning such a system, we propose imitation- and reinforcement-learning-based methods for hybrid offline training and online interact [...]  ...  Firstly, the handcrafted modules designed with domain-specific rules inherently make it hard to extend an existing system to new domains.  ...  We propose an RNN-based online joint SLU model that performs intent detection and slot filling as each input word arrives.  ...
doi:10.1184/r1/7224275.v1 fatcat:vnfk2fv4gbhb3mzezd4hkw7fta

Recurrent Neural Network Language Generation for Dialogue Systems

Tsung-Hsien Wen, Apollo-University Of Cambridge Repository, Stephen Young
2018
A statistical approach to language generation can learn language decisions directly from data without relying on hand-coded rules or heuristics, which brings scalability and flexibility to NLG.  ...  The RNN-based surface realiser and gating mechanism use a neural network to learn end-to-end language generation decisions from input dialogue act and sentence pairs; the approach also integrates sentence planning  ...  rules and minimal handcrafted components.  ...
doi:10.17863/cam.22900 fatcat:h2zjgj7zc5hgfkkqldxscgq72u

Reassessing inflectional regularity in Modern Greek conjugation [chapter]

Stavros Bompolas, Franco Alberto Cardillo, Marcello Ferro, Claudia Marzi, Vito Pirrelli
Proceedings of the Third Italian Conference on Computational Linguistics CLiC-it 2016  
Acknowledgments We would like to thank the Appetitoso team for making available the system and for providing us with the data for this work.  ...  Acknowledgments A special thank is due to Alberto Lavelli and Alessandro Mazzei for enabling us to carry out an exact comparison with their parser.  ...  According to the competition rules, the only training data we used are the ones that have been provided by the task organisers.  ... 
doi:10.4000/books.aaccademia.1721 fatcat:6mvr6rntdnhrlgxyqoqrvasc6u