
A Comparison of Deep Learning Methods for Language Understanding

Mandy Korpusik, Zoe Liu, James Glass
2019 Interspeech 2019  
In this paper, we compare a suite of neural networks (recurrent, convolutional, and the recently proposed BERT model) to a CRF with hand-crafted features on three semantic tagging corpora: the Air Travel Information System (ATIS) benchmark, restaurant queries, and written and spoken meal descriptions.  ...  Related Work: Substantial work has been devoted to spoken language understanding, specifically semantic tagging of the ATIS corpus for flights and air travel.  ...
doi:10.21437/interspeech.2019-1262 dblp:conf/interspeech/KorpusikLG19 fatcat:tkebx7eebveexgydikf3gfvxle
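
To make concrete what a CRF baseline with hand-crafted features looks like next to these neural taggers, here is a minimal sketch in Python. The feature template and the use of the sklearn-crfsuite package are illustrative assumptions, not the authors' exact setup.

```python
# A minimal hand-crafted-feature CRF for ATIS-style slot tagging.
# Feature set and toolkit choice are illustrative assumptions.
import sklearn_crfsuite

def word_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),          # lexical identity
        "is_digit": w.isdigit(),     # useful for flight numbers and times
        "suffix3": w[-3:],           # crude morphology
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Toy ATIS-style example: tokens paired with BIO slot labels.
X = [[word_features(s, i) for i in range(len(s))]
     for s in [["flights", "from", "boston", "to", "denver"]]]
y = [["O", "O", "B-fromloc", "O", "B-toloc"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))
```

The neural competitors in the paper replace this feature dictionary with learned representations, which is precisely the trade-off being measured.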

A Survey on Spoken Language Understanding: Recent Advances and New Frontiers [article]

Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
2021 arXiv   pre-print
Spoken Language Understanding (SLU) aims to extract the semantic frame of user queries, which is a core component in a task-oriented dialog system.  ...  Specifically, we give a thorough review of this research field, covering different aspects including (1) new taxonomy: we provide a new perspective for the SLU field, including single model vs. joint model  ...  BERT: Pre-training of deep bidirectional transformers for language understanding.  ...  neural network structured output prediction for spoken language understanding.  ...
arXiv:2103.03095v2 fatcat:krhrfeomafd6nds2m4o5djbzby

Label-Dependencies Aware Recurrent Neural Networks [article]

Yoann Dupont and Marco Dinarelli and Isabelle Tellier
2017 arXiv   pre-print
We compare this RNN variant to all the other RNN models, Elman and Jordan RNN, LSTM and GRU, on two well-known tasks of Spoken Language Understanding (SLU).  ...  other RNNs, but also outperforms sophisticated CRF models.  ...  Evaluation Corpora for Spoken Language Understanding: We evaluated our models on two tasks of Spoken Language Understanding (SLU) [25].  ...  Indeed, we observed better performance when using a word window  ...
arXiv:1706.01740v1 fatcat:nm7rb5wsevfddaecpenp6goeh4
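
The label-dependency idea is easiest to see in the Jordan-style RNN the abstract mentions, which feeds the previous label prediction back into the recurrence. Below is a minimal, unbatched sketch; all names and sizes are illustrative, not the authors' implementation.

```python
# A Jordan-style RNN tagger: the previous step's label distribution is
# concatenated to the current word embedding, making the model aware of
# label dependencies. Dimensions are illustrative.
import torch
import torch.nn as nn

class JordanTagger(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_labels):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn_cell = nn.Linear(emb_dim + n_labels, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_labels)
        self.n_labels = n_labels

    def forward(self, tokens):                      # tokens: (seq_len,)
        y_prev = torch.zeros(self.n_labels)         # previous label distribution
        logits = []
        for x in self.emb(tokens):                  # step through the sentence
            h = torch.tanh(self.rnn_cell(torch.cat([x, y_prev])))
            step_logits = self.out(h)
            y_prev = torch.softmax(step_logits, dim=-1)  # feed the label back
            logits.append(step_logits)
        return torch.stack(logits)

tagger = JordanTagger(vocab_size=1000, emb_dim=32, hidden_dim=64, n_labels=10)
print(tagger(torch.tensor([4, 8, 15])).shape)       # torch.Size([3, 10])
```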

Label-Dependencies Aware Recurrent Neural Networks [chapter]

Yoann Dupont, Marco Dinarelli, Isabelle Tellier
2018 Lecture Notes in Computer Science  
We compare this RNN variant to all the other RNN models, Elman and Jordan RNN, LSTM and GRU, on two well-known tasks of Spoken Language Understanding (SLU).  ...  other RNNs, but also outperforms sophisticated CRF models.  ...  Evaluation Corpora for Spoken Language Understanding: We evaluated our models on two tasks of Spoken Language Understanding (SLU) [25]: the ATIS corpus (Air Travel Information System) [26] was collected  ...
doi:10.1007/978-3-319-77113-7_4 fatcat:xklqvoghz5dchm35k3ynrn5pvu

Review of Research on Task-Oriented Spoken Language Understanding

Lixian Hou, Yanling Li, Chengcheng Li, Min Lin
2019 Journal of Physics: Conference Series
Spoken language understanding (SLU) is an important functional module of the dialogue system. Slot filling and intent detection are two key sub-tasks of task-oriented spoken language understanding.  ...  In recent years, joint recognition methods have become the mainstream approach in spoken language understanding for solving slot filling and intent detection.  ...  Introduction: Spoken language understanding is a key part of the dialogue system. The performance of spoken language understanding directly affects the entire dialogue system.  ...
doi:10.1088/1742-6596/1267/1/012023 fatcat:p7pnfjpukjh5vcegcotyuws4aa

Effective Spoken Language Labeling with Deep Recurrent Neural Networks [article]

Marco Dinarelli, Yoann Dupont, Isabelle Tellier
2017 arXiv   pre-print
In this paper, we focus on Spoken Language Understanding (SLU), the module of spoken dialog systems responsible for extracting a semantic interpretation from the user utterance.  ...  Understanding spoken language is a highly complex problem, which can be decomposed into several simpler tasks.  ...  Evaluation Tasks for Spoken Language Understanding: We evaluated our models on two widely used tasks of Spoken Language Understanding (SLU) [De Mori et al., 2008].  ...
arXiv:1706.06896v1 fatcat:qwo2kk2eyjg4rdwg6437nin5zq

BERT for Joint Intent Classification and Slot Filling [article]

Qian Chen, Zhu Zhuo, Wen Wang
2019 arXiv   pre-print
Intent classification and slot filling are two essential tasks for natural language understanding.  ...  Recently, a new language representation model, BERT (Bidirectional Encoder Representations from Transformers), facilitates pre-training deep bidirectional representations on large-scale unlabeled corpora  ...  Natural language understanding (NLU) is critical to the performance of goal-oriented spoken dialogue systems.  ...
arXiv:1902.10909v1 fatcat:rvawg64t5retncmq336on3c7n4
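
A common way to realize the joint model this paper describes is one intent head on BERT's pooled [CLS] output plus one token-level slot head; training then minimizes the sum of the two cross-entropy losses. The sketch below uses the Hugging Face transformers package; the head shapes and label counts are assumptions.

```python
# Joint BERT: a sentence-level intent classifier over the pooled [CLS]
# representation and a token-level slot classifier over all hidden states.
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class JointBert(nn.Module):
    def __init__(self, n_intents, n_slots):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)  # sentence-level head
        self.slot_head = nn.Linear(hidden, n_slots)      # token-level head

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)   # (B, n_intents)
        slot_logits = self.slot_head(out.last_hidden_state)   # (B, T, n_slots)
        return intent_logits, slot_logits

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tok(["book a flight from boston to denver"], return_tensors="pt")
model = JointBert(n_intents=22, n_slots=120)  # label counts are illustrative
intent_logits, slot_logits = model(batch["input_ids"], batch["attention_mask"])
```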

Encoder-decoder with Focus-mechanism for Sequence Labelling Based Spoken Language Understanding [article]

Su Zhu, Kai Yu
2017 arXiv   pre-print
This paper investigates the encoder-decoder framework with attention for sequence-labelling-based spoken language understanding.  ...  To address this limitation, we propose a novel focus mechanism for the encoder-decoder framework.  ...  Introduction: In a spoken dialogue system, Spoken Language Understanding (SLU) is a key component that parses user utterances into corresponding semantic concepts.  ...
arXiv:1608.02097v2 fatcat:3wj3krwsrzgtpmiacbp6sga3o4
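
Since sequence labelling aligns inputs and outputs one-to-one, the focus idea can be read as: at decoding step i, condition directly on encoder state h_i rather than on a soft attention average. The sketch below is a minimal, unbatched reading of that idea; the cell type and sizes are assumptions.

```python
# A "focus"-style decoder for sequence labelling: step i consumes encoder
# state h_i directly (the aligned position) instead of an attention mixture.
import torch
import torch.nn as nn

class FocusDecoder(nn.Module):
    def __init__(self, hidden_dim, n_labels):
        super().__init__()
        self.cell = nn.GRUCell(hidden_dim + n_labels, hidden_dim)
        self.out = nn.Linear(hidden_dim, n_labels)
        self.n_labels = n_labels

    def forward(self, enc_states):                  # enc_states: (T, hidden_dim)
        s = torch.zeros(1, enc_states.size(1))      # decoder state
        y_prev = torch.zeros(self.n_labels)         # previous label distribution
        logits = []
        for h_i in enc_states:                      # focus: step i reads h_i
            x = torch.cat([h_i, y_prev]).unsqueeze(0)
            s = self.cell(x, s)
            step = self.out(s).squeeze(0)
            y_prev = torch.softmax(step, dim=-1)
            logits.append(step)
        return torch.stack(logits)

dec = FocusDecoder(hidden_dim=64, n_labels=10)
print(dec(torch.randn(5, 64)).shape)                # torch.Size([5, 10])
```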

Discriminative Self-training for Punctuation Prediction [article]

Qian Chen, Wen Wang, Mengzhe Chen, Qinglin Zhang
2021 arXiv   pre-print
Experimental results on the English IWSLT2011 benchmark test set and an internal Chinese spoken language dataset demonstrate that the proposed approach achieves significant improvement on punctuation prediction  ...  natural language processing applications.  ...  For unlabeled spoken language data, we use LibriSpeech [4], Fisher Speech Transcripts Part 1 and Part 2 [35] for the English dataset and use internal speech transcripts without punctuation for the  ...
arXiv:2104.10339v2 fatcat:gfnf5xxkzzej7akmebwulho7bm
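
Stripped to its core, self-training of this kind alternates between pseudo-labelling unpunctuated transcripts with the current model and retraining on the union of labeled and confident pseudo-labeled data. The helper functions and the plain confidence threshold below are hypothetical; the paper's discriminative candidate selection is more elaborate than this.

```python
# Generic self-training loop for punctuation prediction. `train` and
# `predict_with_conf` are hypothetical helpers: `train` fits a model on
# (sentence, tags) pairs, `predict_with_conf` returns predicted punctuation
# tags plus a scalar confidence for one unpunctuated sentence.
def self_train(labeled, unlabeled, train, predict_with_conf,
               rounds=3, threshold=0.9):
    model = train(labeled)
    for _ in range(rounds):
        pseudo = []
        for sent in unlabeled:
            tags, conf = predict_with_conf(model, sent)
            if conf >= threshold:               # keep only confident predictions
                pseudo.append((sent, tags))
        model = train(labeled + pseudo)         # retrain on labeled + pseudo
    return model
```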

Cross-Lingual Multi-Task Neural Architecture for Spoken Language Understanding

Yujiang Li, Xuemin Zhao, Weiqun Xu, Yonghong Yan
2018 Interspeech 2018  
Cross-lingual spoken language understanding (SLU) systems traditionally require machine translation services for language portability and liberation from human supervision.  ...  character and word representations, bidirectional Long Short-Term Memory and conditional random fields together, while an attention-based classifier is introduced for intent determination.  ...  Introduction: Cross-lingual spoken language understanding (SLU) can be achieved in a variety of ways, especially given the increasingly available machine translation (MT) services, where no human supervision  ...
doi:10.21437/interspeech.2018-1039 dblp:conf/interspeech/LiZX018 fatcat:2hv45jpcx5dhhjhorfshfrwik4

A Self-Attentive Model with Gate Mechanism for Spoken Language Understanding

Changliang Li, Liang Li, Ji Qi
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
Spoken Language Understanding (SLU), which typically involves intent determination and slot filling, is a core component of spoken dialogue systems.  ...  This paper gives a new perspective for research on SLU.  ...  In this paper, we focus on spoken language understanding, which is a core component of a spoken dialogue system. It typically involves two major tasks, intent determination and slot filling.  ...
doi:10.18653/v1/d18-1417 dblp:conf/emnlp/LiLQ18 fatcat:25ijxbqe3nfdhfwqsupokoc2ty
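
One way to picture the gate such models use is a sigmoid gate that lets the sentence-level intent vector modulate each token's slot features. The fusion formula below is an illustrative assumption, not the paper's exact equations.

```python
# A gated fusion of token-level slot features with a sentence-level intent
# vector: g decides, per dimension, how much of each signal to pass through.
import torch
import torch.nn as nn

class IntentSlotGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, token_states, intent_vec):
        # token_states: (T, dim); intent_vec: (dim,)
        c = intent_vec.expand_as(token_states)            # broadcast the intent
        g = torch.sigmoid(self.gate(torch.cat([token_states, c], dim=-1)))
        return g * token_states + (1 - g) * c             # gated interpolation

fuse = IntentSlotGate(dim=64)
print(fuse(torch.randn(7, 64), torch.randn(64)).shape)    # torch.Size([7, 64])
```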

Learning the Morphological and Syntactic Grammars for Named Entity Recognition

Mengtao Sun, Qiang Yang, Hao Wang, Mark Pasquine, Ibrahim A. Hameed
2022 Information  
The proposed neural network consists of a bidirectional Long Short-Term Memory (Bi-LSTM) layer to capture word-level grammars, while a bidirectional Graph Attention (Bi-GAT) layer is used to capture sentence-level  ...  The experiments were performed on four Nordic languages, which have many grammar rules. The model was named the NorG network (Nor: Nordic Languages, G: Grammar).  ...  Due to the rich morphological changes in languages, some transformations may help the model locate named entities.  ...
doi:10.3390/info13020049 fatcat:e4ypvy7webec5lsyf5eilproky
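
The sentence-level component can be pictured as a graph-attention layer in which each token attends only to its neighbours in a syntactic graph, stacked on top of the Bi-LSTM output. The single-head layer below is a generic GAT sketch; the adjacency matrix is assumed to include self-loops, and all sizes are illustrative rather than the paper's configuration.

```python
# A single-head graph-attention layer: attention scores are computed for all
# token pairs, non-edges are masked out, and each token's new representation
# is a weighted mix of its graph neighbours.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):              # h: (n, dim); adj: (n, n) 0/1 mask
        z = self.proj(h)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),   # node i
                           z.unsqueeze(0).expand(n, n, -1)],  # node j
                          dim=-1)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))
        scores = scores.masked_fill(adj == 0, float("-inf"))  # mask non-edges
        alpha = torch.softmax(scores, dim=-1)                 # neighbour weights
        return alpha @ z                                      # neighbourhood mix
```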

Hybrid Neural Models For Sequence Modelling: The Best Of Three Worlds [article]

Marco Dinarelli, Loïc Grobol
2019 arXiv   pre-print
We propose a neural architecture with the main characteristics of the most successful neural models of the last years: bidirectional RNNs, encoder-decoder, and the Transformer model.  ...  for this kind of task.  ...  ., 2011), and also Spoken Language Understanding (SLU) in the context of human-machine dialog systems (De Mori et al., 2008).  ...
arXiv:1909.07102v1 fatcat:i6wzxvdqrvaylfl6htdkmf27ki

A System for Automated Image Editing from Natural Language Commands [article]

Jacqueline Brixey, Ramesh Manuvinakurike, Nham Le, Tuan Lai, Walter Chang, Trung Bui
2018 arXiv   pre-print
This work presents the task of modifying images in an image editing program using natural language written commands.  ...  We experimented with different machine learning models and found that the LSTM, the SVM, and the bidirectional LSTM-CRF joint models are the best at detecting image editing actions and associated entities  ...  BiLSTM-CRF combines a bidirectional LSTM with a CRF model.  ...
arXiv:1812.01083v1 fatcat:vqudt5ek7rak7agdfpgwrcnxzi
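
For readers unfamiliar with the BiLSTM-CRF combination the snippet mentions, the sketch below shows the standard pattern: a bidirectional LSTM produces per-token emission scores and a CRF layer scores entire label sequences. The pytorch-crf package and all sizes are assumptions, not the paper's implementation.

```python
# Standard BiLSTM-CRF tagger: LSTM emissions feed a CRF that models
# transitions between labels; decoding is Viterbi over the whole sequence.
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, bidirectional=True,
                            batch_first=True)
        self.emit = nn.Linear(hidden_dim, n_tags)   # per-token emission scores
        self.crf = CRF(n_tags, batch_first=True)

    def loss(self, tokens, tags):
        emissions = self.emit(self.lstm(self.emb(tokens))[0])
        return -self.crf(emissions, tags)           # negative log-likelihood

    def decode(self, tokens):
        emissions = self.emit(self.lstm(self.emb(tokens))[0])
        return self.crf.decode(emissions)           # best tag sequence (Viterbi)
```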

SlotRefine: A Fast Non-Autoregressive Model for Joint Intent Detection and Slot Filling [article]

Di Wu, Liang Ding, Fan Lu, Jian Xie
2020 arXiv   pre-print
Slot filling and intent detection are two main tasks in a spoken language understanding (SLU) system.  ...  In this paper, we propose a novel non-autoregressive model named SlotRefine for joint intent detection and slot filling.  ...  Acknowledgments: We thank the anonymous reviewers for their helpful suggestions. We gratefully acknowledge the support of the DuerOS department at Baidu.  ...
arXiv:2010.02693v2 fatcat:n5cg2bdogra5lexsodl3yxjrbe
Showing results 1–15 of 303.