2,904 Hits in 6.4 sec

Space Efficient Context Encoding for Non-Task-Oriented Dialogue Generation with Graph Attention Transformer

Fabian Galetzka, Jewgeni Rose, David Schlangen, Jens Lehmann
2021 Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)   unpublished
To improve the coherence and knowledge retrieval capabilities of non-task-oriented dialogue systems, recent Transformer-based models aim to integrate fixed background context.  ...  Further, models trained with our proposed context encoding generate dialogues that are judged to be more comprehensive and interesting.  ...  Acknowledgements We thank our colleagues from the Digital Assistant for Mobility team at the Volkswagen Group Innovation Europe for their support in preparing the human evaluation.  ... 
doi:10.18653/v1/2021.acl-long.546 fatcat:lyqg4jiujjdelh3c34dj3pmwge

Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey [article]

Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, Vinay Adiga, Erik Cambria
2021 arXiv   pre-print
...  Neural Networks, CNN, RNN, Hierarchical Recurrent Encoder-Decoder, Memory Networks, Attention, Transformer, Pointer Net, CopyNet, Reinforcement Learning, GANs, Knowledge Graph, Survey, Review  ...  Furthermore, we comprehensively review the evaluation methods and datasets for dialogue systems to pave the way for future research.  ...  Henderson et al. (2019b) built a transformer-based response retrieval model for task-oriented dialogue systems.  ... 
arXiv:2105.04387v4 fatcat:stperoq73rgyja5b7zcfysjh5q

Modeling ASR Ambiguity for Dialogue State Tracking Using Word Confusion Networks [article]

Vaishali Pal, Fabien Guillot, Manish Shrivastava, Jean-Michel Renders, Laurent Besacier
2020 arXiv   pre-print
We encode the 2-dimensional confnet into a 1-dimensional sequence of embeddings using an attentional confusion network encoder which can be used with any DST system.  ...  Our confnet encoder is plugged into the state-of-the-art 'Global-locally Self-Attentive Dialogue State Tracker' (GLAD) model for DST and obtains significant improvements in both accuracy and inference time  ...  Introduction Spoken task-oriented dialogue systems guide the user to complete a certain task through speech interaction.  ... 
arXiv:2002.00768v2 fatcat:edz2xna32vee5eloosmd4v2qjq
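The 2-D-to-1-D confnet encoding described in this entry can be sketched roughly as follows. This is a minimal illustration, not the paper's exact formulation: the toy vocabulary, the embedding dimensionality, the shared `query` vector, and the way ASR posteriors are mixed into the attention scores are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy vocabulary embeddings and a shared attention query; both would be
# learned parameters in the real model, random here for illustration.
vocab = ["book", "cook", "a", "table", "cable"]
embed = {w: rng.normal(size=DIM) for w in vocab}
query = rng.normal(size=DIM)

def encode_confnet(confnet):
    """Collapse a 2-D confusion network -- a list of slots, each a list of
    (word, ASR posterior) hypotheses -- into a 1-D sequence of embeddings
    by attention-weighting the competing hypotheses within each slot."""
    out = []
    for slot in confnet:
        vecs = np.stack([embed[w] for w, _ in slot])    # (k, DIM)
        priors = np.array([p for _, p in slot])
        # mix a "learned" relevance score with the ASR evidence
        scores = vecs @ query + np.log(priors + 1e-9)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                        # softmax over hypotheses
        out.append(weights @ vecs)                      # weighted sum -> (DIM,)
    return np.stack(out)                                # (num_slots, DIM)

# "book a table" misheard: each slot keeps its competing hypotheses.
confnet = [
    [("book", 0.6), ("cook", 0.4)],
    [("a", 1.0)],
    [("table", 0.7), ("cable", 0.3)],
]
seq = encode_confnet(confnet)
print(seq.shape)  # (3, 8): one embedding per slot, consumable by any DST encoder
```

The resulting sequence looks to the downstream tracker like an ordinary embedded utterance, which is why the snippet claims the encoder "can be used with any DST system".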

Modeling ASR Ambiguity for Neural Dialogue State Tracking

Vaishali Pal, Fabien Guillot, Manish Shrivastava, Jean-Michel Renders, Laurent Besacier
2020 Interspeech 2020  
Our confnet encoder is plugged into the 'Global-locally Self-Attentive Dialogue State Tracker' (GLAD) model for DST and obtains significant improvements in both accuracy and inference time compared to using  ...  However, ASR graphs, such as confusion networks (confnets), provide a compact representation of a richer hypothesis space than a top-N ASR list.  ...  Word Confusion Network for DST Confusion Network Encoder Inspired by [15] , we use a word confusion network encoder to transform the graph to a representation space which can be used with any dialogue  ... 
doi:10.21437/interspeech.2020-1783 dblp:conf/interspeech/PalG0RB20 fatcat:fzo3p72klbh57hllfwwmffnmii

Neural Approaches to Conversational AI

Jianfeng Gao, Michel Galley, Lihong Li
2018 The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval - SIGIR '18  
us/research/publication/neural-approaches-toconversational-ai/ We thank Lihong Li, Bill Dolan and Yun-Nung (Vivian) Chen for contributing slides.  ...  [slide residue: ELMo embedding vectors, one per word, with low- and high-level BiLSTM context vectors for each word in context]  ...  [truncated reference to "... Language Processing" by Yih, He and Gao] Case study: ReasoNet with shared memory. Shared memory (M) encodes task-specific knowledge; long-term memory encodes the KB for answering all questions in QA over a KB.  ... 
doi:10.1145/3209978.3210183 dblp:conf/sigir/GaoG018 fatcat:pnhrb5jgdfgnxac3hxy52a65pm

Multi-Domain Dialogue State Tracking based on State Graph [article]

Yan Zeng, Jian-Yun Nie
2020 arXiv   pre-print
The state graph, encoded with relational-GCN, is fused into the Transformer encoder. Experimental results show that our approach achieves a new state of the art on the task while remaining efficient.  ...  Existing approaches usually concatenate previous dialogue state with dialogue history as the input to a bi-directional Transformer encoder.  ...  Introduction Dialogue state tracking (DST) is a core component in task-oriented dialogue systems.  ... 
arXiv:2010.11137v1 fatcat:xvahm3x4dfbg7dydlftuhdn3n4
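The relational-GCN encoding of a state graph mentioned in this entry can be sketched at a minimal level as below. The node names, relation types, and single-layer setup are invented for illustration; in the actual model the updated node states are fused into a Transformer encoder rather than used alone.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 6

# Hypothetical state graph: nodes for domain/slot/value, edges typed by relation.
nodes = ["restaurant", "area", "centre", "pricerange"]
idx = {n: i for i, n in enumerate(nodes)}
edges = {
    "has_slot":  [("restaurant", "area"), ("restaurant", "pricerange")],
    "has_value": [("area", "centre")],
}

H = rng.normal(size=(len(nodes), DIM))                  # initial node states
W_self = rng.normal(size=(DIM, DIM))                    # self-loop weights
W_rel = {r: rng.normal(size=(DIM, DIM)) for r in edges} # one matrix per relation

def rgcn_layer(H):
    """One relational-GCN layer: each node aggregates neighbour messages
    through a relation-specific weight matrix, plus a self-loop, then ReLU."""
    out = H @ W_self
    for rel, pairs in edges.items():
        indeg = np.zeros(len(nodes))                    # for mean normalisation
        for _, dst in pairs:
            indeg[idx[dst]] += 1
        for src, dst in pairs:
            out[idx[dst]] += (H[idx[src]] @ W_rel[rel]) / indeg[idx[dst]]
    return np.maximum(out, 0.0)

H1 = rgcn_layer(H)
print(H1.shape)  # (4, 6): one updated state vector per graph node
```

Keeping a separate weight matrix per relation type is what distinguishes relational-GCN from a plain GCN and lets "has_slot" and "has_value" edges carry different messages.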

Advances in Multi-turn Dialogue Comprehension: A Survey [article]

Zhuosheng Zhang, Hai Zhao
2021 arXiv   pre-print
In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.  ...  Among these studies, the fundamental yet challenging type of task is dialogue comprehension whose role is to teach the machines to read and comprehend the dialogue context before responding.  ...  Among the dialogue comprehension studies, the basic technique is dialogue modeling which focuses on how to encode the dialogue context effectively and efficiently to solve the tasks, thus we regard dialogue  ... 
arXiv:2110.04984v2 fatcat:4i4svd2oyvdhhasqx2ungtppue

Advances in Multi-turn Dialogue Comprehension: A Survey [article]

Zhuosheng Zhang, Hai Zhao
2021 arXiv   pre-print
In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.  ...  Among these studies, the fundamental yet challenging type of task is dialogue comprehension whose role is to teach the machines to read and comprehend the dialogue context before responding.  ...  Among the dialogue comprehension studies, the basic technique is dialogue modeling which focuses on how to encode the dialogue context effectively and efficiently to solve the tasks, thus we regard dialogue  ... 
arXiv:2103.03125v2 fatcat:62p6ase66jbhnhm77xlp5ulvre

Efficient Context and Schema Fusion Networks for Multi-Domain Dialogue State Tracking [article]

Su Zhu, Jieyu Li, Lu Chen, Kai Yu
2020 arXiv   pre-print
To encode the dialogue context efficiently, we utilize the previous dialogue state (predicted) and the current dialogue utterance as the input for DST.  ...  In this paper, a novel context and schema fusion network is proposed to encode the dialogue context and schema graph by using internal and external attention mechanisms.  ...  Acknowledgments We thank the anonymous reviewers for their thoughtful comments.  ... 
arXiv:2004.03386v4 fatcat:l6enipvnfjdm7po3lbkptav4oy
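The input construction this snippet describes, using the previous predicted dialogue state plus the current utterance instead of the full dialogue history, might look roughly like this. The `[STATE]`/`[UTT]` markers and the flat `domain-slot = value` serialisation are assumptions for illustration, not the paper's exact scheme.

```python
def dst_input(prev_state, utterance):
    """Serialise the previous (predicted) dialogue state and the current
    utterance into one compact sequence, replacing the full history."""
    state_str = " ; ".join(
        f"{dom}-{slot} = {val}" for (dom, slot), val in sorted(prev_state.items())
    )
    return f"[STATE] {state_str} [UTT] {utterance}"

prev = {("hotel", "area"): "east", ("hotel", "stars"): "4"}
out = dst_input(prev, "I also need free parking.")
print(out)
# [STATE] hotel-area = east ; hotel-stars = 4 [UTT] I also need free parking.
```

The efficiency claim in the abstract follows from this shape: the input length grows with the number of filled slots, not with the number of past turns.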

Semantic Representation for Dialogue Modeling [article]

Xuefeng Bai, Yulong Chen, Linfeng Song, Yue Zhang
2021 arXiv   pre-print
Experimental results on both dialogue understanding and response generation tasks show the superiority of our model.  ...  We develop an algorithm to construct dialogue-level AMR graphs from sentence-level AMRs and explore two ways to incorporate AMRs into dialogue systems.  ...  with a dual-attention mechanism (Song et al., 2019a). A sequence encoder (SeqEncoder) transforms a  ... 
arXiv:2105.10188v2 fatcat:unzjukziljbc7lqs4w722gczh4
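At a high level, merging sentence-level AMRs into one dialogue-level graph can be sketched as below. The `:sntN` utterance edges under a global `dialog` root follow the multi-sentence AMR convention; the paper's full algorithm (speaker edges, coreference links, node merging) is not reproduced here, so treat this as an assumed simplification.

```python
def build_dialogue_amr(sentence_amrs):
    """sentence_amrs: one (root_node, edges) pair per utterance, where edges
    are (src, relation, dst) triples with node names already made unique.
    Returns a single graph rooted at a global 'dialog' node."""
    edges = [("dialog", f":snt{i + 1}", root)       # link each utterance root
             for i, (root, _) in enumerate(sentence_amrs)]
    for _, sent_edges in sentence_amrs:             # then copy sentence edges
        edges.extend(sent_edges)
    return "dialog", edges

amr1 = ("want-01", [("want-01", ":ARG0", "i"), ("want-01", ":ARG1", "pizza")])
amr2 = ("recommend-01", [("recommend-01", ":ARG1", "margherita")])
root, edges = build_dialogue_amr([amr1, amr2])
print(root, len(edges))  # dialog 5
```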

UniConv: A Unified Conversational Neural Architecture for Multi-domain Task-oriented Dialogues [article]

Hung Le, Doyen Sahoo, Chenghao Liu, Nancy F. Chen, Steven C.H. Hoi
2020 arXiv   pre-print
Building an end-to-end conversational agent for multi-domain task-oriented dialogues has been an open challenge for two main reasons.  ...  Second, the dialogue agent must also process various types of information across domains, including dialogue context, dialogue states, and database, to generate natural responses to users.  ...  Acknowledgments We thank all reviewers for their insightful feedback to the manuscript of this paper.  ... 
arXiv:2004.14307v2 fatcat:resbk6yaercebanfrbhpypmvmq

Toward Interpretability of Dual-Encoder Models for Dialogue Response Suggestions [article]

Yitong Li, Dianqi Li, Sushant Prakash, Peng Wang
2020 arXiv   pre-print
We compare the proposed model with existing methods for the dialogue response task on two public datasets (Persona and Ubuntu).  ...  We present an attentive dual encoder model that includes an attention mechanism on top of the extracted word-level features from two encoders, one for context and one for label respectively.  ...  [17] focuses on dynamic scene navigation with MI and [25] applies MI on the graph aligning task.  ... 
arXiv:2003.04998v1 fatcat:krbawitty5aslfrfo3kvtukgvi
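A rough sketch of the attentive dual encoder this entry describes: two encoders produce word-level features for context and label, an attention layer pools each side, and a dot product scores the pair. The deterministic stand-in encoder and the shared attention query vectors are placeholders for the learned components in the abstract.

```python
import zlib

import numpy as np

DIM = 8

def encode(tokens):
    """Stand-in word-level encoder: deterministic pseudo-random embeddings
    per token sequence (a real model would use an RNN or Transformer)."""
    seed = zlib.crc32(" ".join(tokens).encode())
    return np.random.default_rng(seed).normal(size=(len(tokens), DIM))

def attentive_pool(feats, query):
    """Attention over word-level features, collapsing them to one vector."""
    scores = feats @ query
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ feats

q_ctx = np.random.default_rng(3).normal(size=DIM)   # context-side attention query
q_lab = np.random.default_rng(4).normal(size=DIM)   # label-side attention query

def score(context, label):
    """Dual-encoder score: pool each side independently, then dot product,
    so candidate responses can be encoded once and cached."""
    c = attentive_pool(encode(context), q_ctx)
    l = attentive_pool(encode(label), q_lab)
    return float(c @ l)

ctx = ["how", "do", "i", "reset", "my", "password"]
cands = [["click", "forgot", "password"], ["nice", "weather", "today"]]
ranked = sorted(cands, key=lambda cand: score(ctx, cand), reverse=True)
print(len(ranked))  # 2 candidates, ranked by score
```

The interpretability angle of the paper comes from inspecting the attention weights `w` on each side, which indicate which words drove the match.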

Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge [article]

Zhuosheng Zhang, Junlong Li, Hai Zhao
2021 arXiv   pre-print
We propose a pivot-oriented deep selection model (PoDS) on top of the Transformer-based language models for dialogue comprehension.  ...  Besides, knowledge items related to the dialogue context are extracted from a knowledge graph as external knowledge.  ...  CONCLUSION In this work, we proposed a pivot-oriented deep selection model using BERT as the encoder with pivot-aware contextualized attention mechanisms for the multi-turn response selection task.  ... 
arXiv:2102.05474v1 fatcat:mgze2s6ypbf6zoucrxqmatnroq

A Survey of Knowledge-Enhanced Text Generation [article]

Wenhao Yu, Chenguang Zhu, Zaitang Li, Zhiting Hu, Qingyun Wang, Heng Ji, Meng Jiang
2022 arXiv   pre-print
The goal of text generation is to make machines express in human language. It is one of the most important yet challenging tasks in natural language processing (NLP).  ...  The main content includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge  ...  For example, MemNNs are widely used for encoding dialogue history in task-oriented dialogue systems [102, 131] .  ... 
arXiv:2010.04389v3 fatcat:vzdtlz4j65g2va7gwkbmzyxkhq

Causal-aware Safe Policy Improvement for Task-oriented dialogue [article]

Govardana Sachithanandam Ramachandran, Kazuma Hashimoto, Caiming Xiong
2021 arXiv   pre-print
We demonstrate the effectiveness of this framework on a dialogue-context-to-text generation and end-to-end dialogue task of the MultiWOZ 2.0 dataset.  ...  To this end, we propose a batch RL framework for task-oriented dialogue policy learning: causal-aware safe policy improvement (CASPI).  ...  BART uses a standard encoder-decoder Transformer architecture with a bidirectional encoder and an autoregressive decoder. It is pre-trained on the task of denoising corrupt documents.  ... 
arXiv:2103.06370v1 fatcat:hxalyouwkbffrbdvrvpwvg5evi
Showing results 1–15 of 2,904