Improving Conversation-Context Language Models with Multiple Spoken Language Understanding Models
2019
Interspeech 2019
We expect that the SLU models will help the CCLMs to properly understand the semantic meaning of long-range interactive contexts and to fully leverage them for estimating the next utterance. ...
Our experiments on contact center dialogue ASR tasks demonstrate that SLU-assisted CCLMs combined with three types of SLU models can yield ASR performance improvements. ...
Note that no SLU labels were annotated on the manual transcriptions. In the training set, on average one dialogue included about 121 utterances and one utterance included about 10 words. ...
doi:10.21437/interspeech.2019-1534
dblp:conf/interspeech/MasumuraTAKOKA19
fatcat:4pyoep7klzcprhjcktr7otvqoa
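The abstract describes feeding SLU model outputs into a conversation-context language model. A minimal sketch of that coupling, assuming PyTorch; the layer sizes, module names, and the use of frozen SLU posterior vectors are illustrative choices, not the paper's exact architecture:

```python
# Sketch: a hierarchical LM whose dialogue-level state is augmented with an
# auxiliary SLU classifier's posterior over each previous utterance.
import torch
import torch.nn as nn

class SLUAssistedCCLM(nn.Module):
    def __init__(self, vocab_size, num_slu_labels, emb=128, hid=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.utt_rnn = nn.GRU(emb, hid, batch_first=True)                    # within-utterance
        self.ctx_rnn = nn.GRU(hid + num_slu_labels, hid, batch_first=True)  # across utterances
        self.out = nn.Linear(hid + hid, vocab_size)

    def forward(self, utterances, slu_posteriors):
        # utterances: (batch, n_utts, n_words) token ids
        # slu_posteriors: (batch, n_utts, num_slu_labels) from a frozen SLU model
        b, n, t = utterances.shape
        word_emb = self.embed(utterances.view(b * n, t))
        word_out, utt_vec = self.utt_rnn(word_emb)        # utt_vec: (1, b*n, hid)
        utt_vec = utt_vec.view(b, n, -1)
        ctx_in = torch.cat([utt_vec, slu_posteriors], dim=-1)
        ctx_out, _ = self.ctx_rnn(ctx_in)                 # (b, n, hid)
        # Predict each word from its utterance state plus the *previous* context state.
        prev_ctx = torch.cat([torch.zeros_like(ctx_out[:, :1]), ctx_out[:, :-1]], dim=1)
        prev_ctx = prev_ctx.unsqueeze(2).expand(b, n, t, -1)
        word_out = word_out.view(b, n, t, -1)
        return self.out(torch.cat([word_out, prev_ctx], dim=-1))  # next-word logits
```

The dialogue-level GRU sees each utterance summary together with the SLU posterior for that utterance, so long-range semantic context can inform next-word prediction.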
A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks
[article]
2018
arXiv
pre-print
We evaluate this method on the Switchboard Dialogue Act corpus, and our results show that the consideration of the preceding utterances as a context of the current utterance improves dialogue act detection ...
Nevertheless, previous models of dialogue act classification work at the utterance level, and only very few consider context. ...
Table 3 shows the results of the proposed model with several setups, first without the context, then with one, two, and so on preceding utterances in the context. ...
arXiv:1805.06280v1
fatcat:4xgn6zppe5apdosj2m7e7jcuny
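As a rough illustration of the setups varied in the paper's Table 3 (zero, one, two, ... preceding utterances as context), here is a minimal sketch, assuming PyTorch; the encoder, sizes, and k are illustrative, not the paper's exact configuration:

```python
# Sketch: encode each utterance with a shared GRU, then classify the current
# utterance from its own vector concatenated with the k preceding ones.
import torch
import torch.nn as nn

class ContextDAClassifier(nn.Module):
    def __init__(self, vocab_size, num_acts, k=2, emb=100, hid=128):
        super().__init__()
        self.k = k
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hid, batch_first=True)
        self.classify = nn.Linear(hid * (k + 1), num_acts)

    def forward(self, utterances):
        # utterances: list of k+1 tensors (batch, n_words); the last is current
        vecs = []
        for u in utterances:
            _, h = self.encoder(self.embed(u))   # h: (1, batch, hid)
            vecs.append(h.squeeze(0))
        return self.classify(torch.cat(vecs, dim=-1))  # (batch, num_acts)
```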
An Efficient Approach to Encoding Context for Spoken Language Understanding
[article]
2018
arXiv
pre-print
In our experiments, we demonstrate the effectiveness of our approach on dialogues from two domains. ...
State-of-the-art approaches to SLU use memory networks to encode context by processing multiple utterances from the dialogue at each turn, resulting in significant trade-offs between accuracy and computational ...
Our representation of dialogue context is similar to those used in dialogue state tracking models [17, 18, 19] , thus enabling the sharing of context representation between SLU and DST. ...
arXiv:1807.00267v1
fatcat:7fzeh4zz6ndjtfb75bji3fiine
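A minimal sketch of the efficiency idea in the abstract, assuming PyTorch: rather than re-encoding all past utterances at each turn, keep a single recurrent context state and fold in one utterance vector per turn. Names and sizes are illustrative assumptions:

```python
# Sketch: O(1) context update per turn instead of memory-network style
# reprocessing of the whole history.
import torch
import torch.nn as nn

class RecurrentDialogueContext(nn.Module):
    def __init__(self, utt_dim=256, ctx_dim=256):
        super().__init__()
        self.cell = nn.GRUCell(utt_dim, ctx_dim)

    def init_state(self, batch, device=None):
        return torch.zeros(batch, self.cell.hidden_size, device=device)

    def forward(self, utt_vec, ctx_state):
        # Fold the new utterance vector into the running context state.
        return self.cell(utt_vec, ctx_state)

# Per-turn usage with stand-in utterance encodings:
ctx = RecurrentDialogueContext()
state = ctx.init_state(batch=1)
for utt_vec in [torch.randn(1, 256) for _ in range(3)]:
    state = ctx(utt_vec, state)
```

Because the same state can be handed to both the SLU model and a dialogue state tracker, this also illustrates the context-representation sharing the abstract mentions.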
Role Play Dialogue Aware Language Models Based on Conditional Hierarchical Recurrent Encoder-Decoder
2018
Interspeech 2018
We propose role play dialogue-aware language models (RPDA-LMs) that can leverage interactive contexts in role play multi-turn dialogues for estimating the generative probability of words. ...
In addition, we verify the effectiveness of explicitly taking interactive contexts into consideration. ...
One dialogue means one telephone call between one operator and one customer. Each dialogue was separately recorded and the data set consists of 2,636 dialogues. ...
doi:10.21437/interspeech.2018-2185
dblp:conf/interspeech/MasumuraTAMA18
fatcat:weuxxsutcrbhveygfhbwinhwxi
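A minimal sketch of the role-conditioning idea, assuming PyTorch; concatenating a learned role embedding (operator vs. customer) before the dialogue-level RNN is an illustrative reading of "conditional", not the exact RPDA-LM formulation:

```python
# Sketch: dialogue-level context encoding conditioned on speaker roles.
import torch
import torch.nn as nn

class RoleConditionedContextEncoder(nn.Module):
    def __init__(self, utt_dim=256, role_dim=16, ctx_dim=256, num_roles=2):
        super().__init__()
        self.role_embed = nn.Embedding(num_roles, role_dim)
        self.ctx_rnn = nn.GRU(utt_dim + role_dim, ctx_dim, batch_first=True)

    def forward(self, utt_vecs, roles):
        # utt_vecs: (batch, n_utts, utt_dim); roles: (batch, n_utts) in {0, 1}
        x = torch.cat([utt_vecs, self.role_embed(roles)], dim=-1)
        ctx, _ = self.ctx_rnn(x)
        return ctx   # per-utterance context states conditioning the decoder
```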
An Efficient Approach to Encoding Context for Spoken Language Understanding
2018
Interspeech 2018
In our experiments, we demonstrate the effectiveness of our approach on dialogues from two domains. ...
State-of-the-art approaches to SLU use memory networks to encode context by processing multiple utterances from the dialogue at each turn, resulting in significant trade-offs between accuracy and computational ...
Our representation of dialogue context is similar to those used in dialogue state tracking models [17, 18, 19] , thus enabling the sharing of context representation between SLU and DST. ...
doi:10.21437/interspeech.2018-2403
dblp:conf/interspeech/GuptaRH18
fatcat:rwgmdh7pwffdxgh5adoljoasem
Dialogue History Matters! Personalized Response Selection in Multi-turn Retrieval-based Chatbots
[article]
2021
arXiv
pre-print
However, in real-world conversation scenarios, whether a response candidate is suitable depends not only on the given dialogue context but also on other background information, e.g., wording habits, user-specific dialogue ...
Existing multi-turn context-response matching methods mainly concentrate on obtaining multi-level and multi-dimension representations and better interactions between context utterances and response. ...
ACKNOWLEDGMENTS We would like to thank the anonymous reviewers for their efforts in improving this paper. ...
arXiv:2103.09534v1
fatcat:oy6xk7jzdzgrhnwmmx753inuhu
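A minimal sketch of the personalization idea, assuming PyTorch: a user vector pooled from that user's dialogue history is fused with the usual context-response features before scoring. The mean-pooling and MLP fusion are illustrative assumptions, not the paper's exact hybrid architecture:

```python
# Sketch: matching score that conditions on user-specific dialogue history.
import torch
import torch.nn as nn

class PersonalizedMatcher(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3 * dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))

    def forward(self, ctx_vec, resp_vec, history_vecs):
        # ctx_vec, resp_vec: (batch, dim); history_vecs: (batch, n_past, dim)
        user_vec = history_vecs.mean(dim=1)     # pooled user-specific history
        feats = torch.cat([ctx_vec, resp_vec, user_vec], dim=-1)
        return self.score(feats).squeeze(-1)    # matching score per candidate
```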
Small Changes Make Big Differences: Improving Multi-turn Response Selection in Dialogue Systems via Fine-Grained Contrastive Learning
[article]
2021
arXiv
pre-print
The sequence representation plays a key role in the learning of matching degree between the dialogue context and the response. ...
However, we observe that different context-response pairs sharing the same context tend to have highly similar sequence representations under PLMs, which makes it hard to distinguish ...
The representations of a positive dialogue may be close to the representation of another negative dialogue with a different context, as is shown in Figure 2 . ...
arXiv:2111.10154v2
fatcat:lycakv6bmndalhy3bsvyox2ibm
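For contrast with the fine-grained objective the paper proposes, here is the standard in-batch contrastive (InfoNCE-style) loss for context-response matching, assuming PyTorch; the paper refines this kind of objective precisely because pairs sharing a context get near-identical representations:

```python
# Baseline sketch, not the paper's fine-grained variant: context i must score
# its own response above the other responses in the batch.
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(ctx_vecs, resp_vecs, temperature=0.1):
    # ctx_vecs, resp_vecs: (batch, dim); row i of resp_vecs is the true
    # response for context i, and the other rows act as negatives.
    c = F.normalize(ctx_vecs, dim=-1)
    r = F.normalize(resp_vecs, dim=-1)
    logits = c @ r.t() / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(c.size(0))        # the diagonal holds positives
    return F.cross_entropy(logits, labels)
```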
Hierarchical Knowledge Distillation for Dialogue Sequence Labeling
[article]
2021
arXiv
pre-print
... utterance-level and dialogue-level contexts trained in the teacher model, by training the student model to mimic the teacher model's output at each level. ...
Accurate labeling is often realized by a hierarchically-structured large model consisting of utterance-level and dialogue-level networks that capture the contexts within an utterance and between utterances ...
$T_n$ is the number of utterances in the $n$-th dialogue. Note that $o_t^n$ is a one-hot vector. ...
arXiv:2111.10957v1
fatcat:ohanql6srfd3bc3wkmga72fcka
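A minimal sketch of two-level distillation, assuming PyTorch: the student matches hard labels, the teacher's softened utterance-level outputs, and the teacher's dialogue-level hidden states. The temperature, weights, and MSE term are illustrative assumptions, not the paper's exact losses:

```python
# Sketch: distillation signals at both the utterance and dialogue levels.
import torch.nn.functional as F

def hierarchical_kd_loss(student_utt, teacher_utt, student_dlg, teacher_dlg,
                         labels, T=2.0, alpha=0.5):
    # *_utt: (n_utts, n_labels) per-utterance logits; *_dlg: (n_utts, d)
    # dialogue-level hidden states; labels: (n_utts,) gold label ids.
    hard = F.cross_entropy(student_utt, labels)
    soft = F.kl_div(F.log_softmax(student_utt / T, dim=-1),
                    F.softmax(teacher_utt / T, dim=-1),
                    reduction="batchmean") * T * T
    hidden = F.mse_loss(student_dlg, teacher_dlg)  # mimic dialogue-level states
    return hard + alpha * (soft + hidden)
```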
Unsupervised Domain Adaptation for Dialogue Sequence Labeling Based on Hierarchical Adversarial Training
2020
Interspeech 2020
In this paper, we focus on the utterance-level sequence labeling of hierarchical recurrent neural networks specialized for conversation documents [12] . ...
Experiments on Japanese simulated contact center dialogue datasets demonstrate the effectiveness of the proposed method. ...
In CIDAN, the hidden representation of each utterance is embedded into one vector. ...
doi:10.21437/interspeech.2020-2010
dblp:conf/interspeech/OrihashiITM20
fatcat:xlkbbfo7jnf77eripy42h65hgu
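A minimal sketch of the adversarial ingredient, assuming PyTorch: the standard DANN-style gradient reversal layer, on top of which a domain classifier sits so that the encoder learns domain-invariant representations; the paper's hierarchical variant applies such training at more than one level:

```python
# Sketch: gradient reversal for domain-adversarial training.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None     # flip the gradient toward the encoder

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```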
Multi-turn Dialogue Model Based on the Improved Hierarchical Recurrent Attention Network
2021
International Journal for Engineering Modelling
However, for complex conversations, the traditional attention-based RNN does not fully understand the context, which results in attending to the wrong context and generating irrelevant responses. ...
At present, HRAN, one of the most advanced models for multi-turn dialogue problems, uses a hierarchical recurrent encoder-decoder combined with a hierarchical attention mechanism. ...
Finally, the utterance level attention emphasizes the important utterances of the context and encodes them into a context vector. ...
doi:10.31534/engmod.2021.2.ri.02d
fatcat:mrbjmnrqhfb4fo2ohmtvyneehy
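A minimal sketch of the two attention levels described above, assuming PyTorch; the additive scoring and sizes are illustrative assumptions:

```python
# Sketch: word-level attention builds one vector per utterance, then
# utterance-level attention weights those vectors into a context vector.
import torch
import torch.nn as nn

class TwoLevelAttention(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.word_score = nn.Linear(dim, 1)
        self.utt_score = nn.Linear(dim, 1)

    @staticmethod
    def _attend(states, scores):
        w = torch.softmax(scores, dim=1)      # normalize over the sequence
        return (w * states).sum(dim=1)

    def forward(self, word_states):
        # word_states: (n_utts, n_words, dim) hidden states from a word RNN
        utt_vecs = self._attend(word_states, self.word_score(word_states))
        utt_vecs = utt_vecs.unsqueeze(0)      # (1, n_utts, dim)
        context = self._attend(utt_vecs, self.utt_score(utt_vecs))
        return context.squeeze(0)             # (dim,) context vector
```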
Enhance word representation for out-of-vocabulary on Ubuntu dialogue corpus
[article]
2018
arXiv
pre-print
In addition, we investigated the performance impact of end-of-utterance and end-of-turn token tags. ...
One challenge of Ubuntu dialogue corpus is the large number of out-of-vocabulary words. ...
THE ROLES OF UTTERANCE AND TURN TAGS: There are two special token tags (eou and eot) in the Ubuntu dialogue corpus. The eot tag denotes the end of a user's turn within the context, and the eou tag is used ...
arXiv:1802.02614v2
fatcat:5o35nhut6vfudli63dvpoozh4q
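A minimal sketch of how the two tags delimit a context string (in the released corpus the literal tokens are double-underscored, __eou__ and __eot__):

```python
# Sketch: __eou__ closes each utterance, __eot__ closes a speaker's turn.

def build_context(turns):
    # turns: list of turns, each turn a list of utterance strings by one speaker
    parts = []
    for turn in turns:
        for utt in turn:
            parts.append(utt + " __eou__")
        parts.append("__eot__")
    return " ".join(parts)

print(build_context([["hi", "any idea about apt?"], ["which error do you get?"]]))
# hi __eou__ any idea about apt? __eou__ __eot__ which error do you get? __eou__ __eot__
```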
Dialogue Act Sequence Labeling using Hierarchical encoder with CRF
[article]
2017
arXiv
pre-print
Dialogue act recognition associates dialogue acts (i.e., semantic labels) with utterances in a conversation. ...
... and utterances, an important consideration of natural dialogue. ...
... each utterance in the conversation, based on the representations of the previous encoder. ...
arXiv:1709.04250v2
fatcat:vosaokal2fgujko3rmquinrfk4
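A minimal sketch of the labeling head, assuming PyTorch: a dialogue-level BiGRU emits per-utterance scores and a learned label-transition matrix is decoded with Viterbi, so neighboring dialogue acts inform each other. A full CRF would also train with a sequence-level likelihood; sizes here are assumptions:

```python
# Sketch: emission scores from a dialogue-level RNN plus Viterbi decoding
# over a learned transition matrix.
import torch
import torch.nn as nn

class CRFHead(nn.Module):
    def __init__(self, utt_dim=256, hid=128, n_acts=10):
        super().__init__()
        self.rnn = nn.GRU(utt_dim, hid, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hid, n_acts)
        self.trans = nn.Parameter(torch.zeros(n_acts, n_acts))  # trans[i, j]: score of i -> j

    def viterbi(self, utt_vecs):
        # utt_vecs: (1, n_utts, utt_dim) -> best label sequence (list of ids)
        e, _ = self.rnn(utt_vecs)
        e = self.emit(e).squeeze(0)                 # (n_utts, n_acts)
        score, back = e[0], []
        for t in range(1, e.size(0)):
            total = score.unsqueeze(1) + self.trans + e[t]  # prev x next scores
            score, idx = total.max(dim=0)           # best predecessor per label
            back.append(idx)
        best = [score.argmax().item()]
        for idx in reversed(back):                  # backtrack the best path
            best.append(idx[best[-1]].item())
        return best[::-1]
```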
DialogBERT: Discourse-Aware Response Generation via Learning to Recover and Rank Utterances
[article]
2021
arXiv
pre-print
However, existing methods usually view the dialogue context as a linear sequence of tokens and learn to generate the next word through token-level self-attention. ...
Experiments on three multi-turn conversation datasets show that our approach remarkably outperforms the baselines, such as BART and DialoGPT, in terms of quantitative evaluation. ...
Acknowledgments: The authors would like to thank Prof. Kyunghyun Cho at New York University for his valuable comments on this project. This work was done while the first author was visiting NAVER AI Lab. ...
arXiv:2012.01775v2
fatcat:mv52ma2xvfa45ovahlhlt5wjna
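A minimal sketch of the hierarchical alternative the abstract contrasts with flat token-level self-attention, assuming PyTorch: a token-level transformer summarizes each utterance, and a second transformer attends across utterance vectors. This shows the encoding shape only, not DialogBERT's utterance-recovery and ranking objectives:

```python
# Sketch: two-level transformer encoding of a dialogue.
import torch
import torch.nn as nn

def make_encoder(dim=256, heads=4, layers=2):
    layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
    return nn.TransformerEncoder(layer, layers)

utt_encoder, ctx_encoder = make_encoder(), make_encoder()
tokens = torch.randn(5, 20, 256)              # 5 utterances x 20 token embeddings
utt_vecs = utt_encoder(tokens).mean(dim=1)    # one vector per utterance
context = ctx_encoder(utt_vecs.unsqueeze(0))  # attention across utterances
```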
Knowledge Augmented BERT Mutual Network in Multi-turn Spoken Dialogues
[article]
2022
arXiv
pre-print
However, they lack the capability of modeling multi-turn dynamics within a dialogue particularly in long-term slot contexts. ...
Modern spoken language understanding (SLU) systems rely on sophisticated semantic notions revealed in single utterances to detect intents and slots. ...
Given the token-level representations $h_i^n$ for each word in the utterance $x_n$, attention weights are assigned to reveal the relevance of each knowledge triple under the current context. ...
arXiv:2202.11299v1
fatcat:cgexfdzsuzd7veycgl6kydqcsi
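A minimal sketch of attending over knowledge triples, assuming PyTorch: each token representation $h_i^n$ scores every triple embedding, and the softmax weights yield a knowledge vector per token. The dot-product scoring after a linear projection is an illustrative assumption, not necessarily the paper's exact form:

```python
# Sketch: per-token attention weights over knowledge triple embeddings.
import torch
import torch.nn as nn

class TripleAttention(nn.Module):
    def __init__(self, tok_dim=256, kg_dim=128):
        super().__init__()
        self.proj = nn.Linear(tok_dim, kg_dim)   # map tokens into triple space

    def forward(self, tok_reprs, triples):
        # tok_reprs: (n_tokens, tok_dim); triples: (n_triples, kg_dim)
        scores = self.proj(tok_reprs) @ triples.t()  # (n_tokens, n_triples)
        w = torch.softmax(scores, dim=-1)            # relevance of each triple
        return w @ triples                           # (n_tokens, kg_dim)
```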
Changing the Level of Directness in Dialogue using Dialogue Vector Models and Recurrent Neural Networks
2018
Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
In cooperative dialogues, identifying the intent of one's conversation partner and acting accordingly is of great importance. ...
In this endeavour, we employ dialogue vector models and recurrent neural networks. ...
DVMs are representations of sentences as vectors that capture their semantic meaning in the dialogue context. ...
doi:10.18653/v1/w18-5002
dblp:conf/sigdial/PragstU18
fatcat:scxnhuewcbfcrasjfel2eh4gb4
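A minimal, heavily simplified sketch of one way DVMs could support reformulation, assuming PyTorch: shift a sentence's DVM vector by a learned per-level offset and retrieve the nearest candidate sentence in DVM space. The linear level shift is an assumption, not the paper's method:

```python
# Sketch: retrieval of a reformulated sentence at a target directness level.
import torch
import torch.nn as nn

class DirectnessShifter(nn.Module):
    def __init__(self, dim=100, n_levels=3):
        super().__init__()
        self.shift = nn.Embedding(n_levels, dim)   # one learned offset per level

    def forward(self, dvm_vec, target_level, candidate_vecs):
        # dvm_vec: (dim,); target_level: scalar LongTensor;
        # candidate_vecs: (n_candidates, dim) DVM sentence vectors
        query = dvm_vec + self.shift(target_level)
        sims = torch.cosine_similarity(query.unsqueeze(0), candidate_vecs, dim=-1)
        return sims.argmax()    # index of the retrieved reformulation
```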
Showing results 1 — 15 out of 7,622 results