17,523 Hits in 3.9 sec

Gaussian process decentralized data fusion meets transfer learning in large-scale distributed cooperative perception

Ruofei Ouyang, Bryan Kian Hsiang Low
2019 Autonomous Robots  
doi:10.1007/s10514-018-09826-z fatcat:67yqhwmgozccxni56rxmuapjgm

Smarter Response with Proactive Suggestion: A New Generative Neural Conversation Paradigm

Rui Yan, Dongyan Zhao
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
To address the task, we propose a novel integrated model to handle both response generation and suggestion generation.  ...  In this paper, we propose a new paradigm for neural generative conversations: a smarter response is provided together with a proactive suggestion for a given query.  ...  We propose a Deep Dual Fusion Model for the generative task. The model fuses information from the dual sequences via deep interactions through cell units and gating.  ... 
doi:10.24963/ijcai.2018/629 dblp:conf/ijcai/YanZ18 fatcat:f3owyden5bf5bmbllhyp4o3asm

Dialogue Act Recognition via CRF-Attentive Structured Network [article]

Zheqian Chen, Rongqin Yang, Zhou Zhao, Deng Cai, Xiaofei He
2017 arXiv   pre-print
We incorporate hierarchical semantic inference with a memory mechanism in the utterance modeling.  ...  Specifically, we propose hierarchical semantic inference integrated with a memory mechanism for utterance modeling.  ...  Here θ comes from the former memory-enhanced deep model over utterances u and corresponding dialogue acts y.  ... 
arXiv:1711.05568v1 fatcat:kifitwaz5bbetgdue2cm33u2bq

Detecting Incongruity Between News Headline and Body Text via a Deep Hierarchical Encoder [article]

Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung
2019 arXiv   pre-print
On this dataset, we develop two neural networks with hierarchical architectures that model a complex textual representation of news articles and measure the incongruity between the headline and the body  ...  This research introduces million-scale pairs of news headline and body text dataset with incongruity label, which can uniquely be utilized for detecting news stories with misleading headlines.  ...  We compared our hierarchical deep learning approaches (i.e., AHDE, HRE) with feature-based approaches and standard deep learning models.  ... 
arXiv:1811.07066v2 fatcat:gmll7acyrfhbhh4qrw2fspwrpm

Working memory inspired hierarchical video decomposition with transformative representations [article]

Binjie Qin, Haohao Mao, Ruipeng Zhang, Yueqi Zhu, Song Ding, Xu Chen
2022 arXiv   pre-print
To solve these problems, this study is the first to introduce a flexible visual working memory model in video decomposition tasks to provide interpretable and high-performance hierarchical deep architecture  ...  Then, patch recurrent convolutional LSTM networks with a backprojection module embody unstructured random representations of the control layer in working memory, recurrently projecting spatiotemporally  ...  CONCLUSION AND DISCUSSION Inspired by a flexible working memory model, we proposed dual-stage deep video decomposition networks with transformative representation hierarchy between multiscale patch recurrent  ... 
arXiv:2204.10105v3 fatcat:ifzpeay2qjfvbaznwruwc4dz5m

Detecting Incongruity between News Headline and Body Text via a Deep Hierarchical Encoder

Seunghyun Yoon, Kunwoo Park, Joongbo Shin, Hongjun Lim, Seungpil Won, Meeyoung Cha, Kyomin Jung
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence  
On this dataset, we develop two neural networks with hierarchical architectures that model a complex textual representation of news articles and measure the incongruity between the headline and the body  ...  This research introduces million-scale pairs of news headline and body text dataset with incongruity label, which can uniquely be utilized for detecting news stories with misleading headlines.  ...  We compared our hierarchical deep learning approaches (i.e., AHDE, HRE) with feature-based approaches and standard deep learning models.  ... 
doi:10.1609/aaai.v33i01.3301791 fatcat:mobr2ymj7jcchm3qlpk7hab7nm

Dual Memory Network Model for Biased Product Review Classification

Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang
2018 Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis  
In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.  ...  Compared to state-of-the-art unified prediction models, the evaluations on three benchmark datasets, IMDB, Yelp13, and Yelp14, show that our dual learning model gives performance gains of 0.6%, 1.2%, and  ...  In the proposed Dual User and Product Memory Network (DUPMN) model, we first build a hierarchical LSTM (Hochreiter and Schmidhuber, 1997) model to generate document representations.  ... 
doi:10.18653/v1/w18-6220 dblp:conf/wassa/LongMLXH18 fatcat:iqyddaaqkbh6rbk4n42yso2tae
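The DUPMN snippet above describes reading two separate memory networks (one for the user, one for the product) against a document representation. As a rough illustration only — not the authors' code; the dot-product scoring, dimensions, and fusion-by-concatenation are all assumptions — a single soft memory-read per network can be sketched as:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, memory):
    """Attend over memory rows with dot-product scores and
    return the attention-weighted sum of the rows."""
    weights = softmax(memory @ query)  # (slots,) attention distribution
    return weights @ memory            # (dim,) read vector

# Toy document vector plus separate user and product memories.
rng = np.random.default_rng(0)
dim, slots = 8, 5
doc = rng.normal(size=dim)
user_memory = rng.normal(size=(slots, dim))
product_memory = rng.normal(size=(slots, dim))

# Read each memory independently, then fuse with the document vector.
user_vec = memory_read(doc, user_memory)
product_vec = memory_read(doc, product_memory)
fused = np.concatenate([doc, user_vec, product_vec])
print(fused.shape)  # (24,)
```

Keeping the two memories separate means the user bias and the product bias each get their own attention distribution before fusion, which is the core idea the abstract emphasizes.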

Dual Memory Network Model for Biased Product Review Classification [article]

Yunfei Long, Mingyu Ma, Qin Lu, Rong Xiang, Chu-Ren Huang
2018 arXiv   pre-print
In this work, we propose a dual user and product memory network (DUPMN) model to learn user profiles and product reviews using separate memory networks.  ...  Compared to state-of-the-art unified prediction models, the evaluations on three benchmark datasets, IMDB, Yelp13, and Yelp14, show that our dual learning model gives performance gain of 0.6%, 1.2%, and  ...  In the proposed Dual User and Product Memory Network (DUPMN) model, we first build a hierarchical LSTM (Hochreiter and Schmidhuber, 1997) model to generate document representations.  ... 
arXiv:1809.05807v1 fatcat:c2bmyhgq3bgv3nvxuh2zhhlkbq

Active Long Term Memory Networks [article]

Tommaso Furlanello, Jiaping Zhao, Andrew M. Saxe, Laurent Itti, Bosco S. Tjan
2016 arXiv   pre-print
This paper introduces the Active Long Term Memory Networks (A-LTM), a model of sequential multi-task deep learning that is able to maintain previously learned associations between sensory input and behavioral  ...  We re-frame McClelland's seminal hippocampal theory with respect to the Catastrophic Inference (CI) behavior exhibited by modern deep architectures trained with back-propagation and inhomogeneous sampling  ...  The A-LTM Model We approach the problem of learning with a sequence of input-outputs that exhibits transitions in its latent factors using a dual system.  ... 
arXiv:1606.02355v1 fatcat:c4sials7qfbe5fajedk65rgv4i

Guest Editorial Special Issue on Deep Integration of Artificial Intelligence and Data Science for Process Manufacturing

Feng Qian, Yaochu Jin, S. Joe Qin, Kai Sundmacher
2021 IEEE Transactions on Neural Networks and Learning Systems  
In order to capture the sequential dependence among different variables in process manufacturing, the article "Dual attention-based encoder-decoder: A customized sequence-to-sequence learning for soft sensor development," by Feng et al., proposes a dual attention-based encoder-decoder for soft sensors based on the long short-term memory network.  ... 
doi:10.1109/tnnls.2021.3092896 fatcat:kta54yqaqvd7fcbyvfaueij36e

Improving source code suggestion with code embedding and enhanced convolutional long short‐term memory

Yasir Hussain, Zhiqiu Huang, Yu Zhou
2021 IET Software  
First, DeepSN uses an enhanced hierarchical convolutional neural network combined with code-embedding to automatically extract the top-notch features of the source code and to learn useful semantic information  ...  Next, the source code's long and short-term context dependencies are captured by using long short-term memory.  ...  Then each sequence is subdivided into multiple sequences with a fixed size context (τ) by employing a sliding window approach.  ... 
doi:10.1049/sfw2.12017 fatcat:tbk4m3ycarbelnpoetnjpw5jwy
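The DeepSN snippet above mentions subdividing each token sequence into fixed-size context windows (τ) with a sliding window. A minimal sketch of that preprocessing step — the stride of 1 and the short-sequence fallback are assumptions, not details from the paper — could look like:

```python
def sliding_windows(tokens, tau):
    """Subdivide a token sequence into windows of fixed context
    size tau using a stride-1 sliding window; sequences shorter
    than tau are returned as a single (short) window."""
    if len(tokens) < tau:
        return [tokens]
    return [tokens[i:i + tau] for i in range(len(tokens) - tau + 1)]

# Example: tokenized source-code line split into context windows.
code_tokens = ["def", "add", "(", "a", ",", "b", ")", ":"]
print(sliding_windows(code_tokens, 4)[0])  # ['def', 'add', '(', 'a']
```

Each window then becomes one fixed-length training example for the downstream convolutional/LSTM layers.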

Generating Emotional Controllable Response Based On Multi-Task and Dual Attention Framework

Weiran Xu, Xiusen Gu, Guang Chen
2019 IEEE Access  
We also combine the MTDA framework with state of the art generative models to train emotional generation systems.  ...  In this paper, we propose the multi-task and dual attentions (MTDA) framework for generating an emotional response.  ...  and external memory.  ... 
doi:10.1109/access.2019.2922336 fatcat:f5zfamfhozfhxjt6oac77p5wte

HARC-New Hybrid Method with Hierarchical Attention Based Bidirectional Recurrent Neural Network with Dilated Convolutional Neural Network to Recognize Multilabel Emotions from Text

Md Shofiqul Islam, Mst Sunjida Sultana, Mr Uttam Kumar, Jubayer Al Mahmud, SM Jahidul Islam
2021 Jurnal Ilmiah Teknik Elektro Komputer dan Informatika  
In this analysis, the proposed new hybrid deep learning HARC model architecture for the recognition of multilevel textual sentiment combines hierarchical attention with a Convolutional Neural Network  ...  The main goals of this proposed technique are to use deep learning approaches to identify multilevel textual sentiment with far less time and a more accurate and simpler network structure training for better  ...  When dealing with long-range dependencies, this model is ineffective. The LSTM [12] method was created with improved memory usage and recall control in mind.  ... 
doi:10.26555/jiteki.v7i1.20550 fatcat:xnes4y6p2fawfcfiuxecfcu2ru

PredCNN: Predictive Learning with Cascade Convolutions

Ziru Xu, Yunbo Wang, Mingsheng Long, Jianmin Wang
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
Mainstream recurrent models suffer from huge memory usage and computation cost, while convolutional models are unable to effectively capture the temporal dependencies between consecutive video frames.  ...  training speed and lower memory footprint.  ...  spatial appearances and temporal variations simultaneously with dual memory LSTMs.  ... 
doi:10.24963/ijcai.2018/408 dblp:conf/ijcai/XuWLW18 fatcat:5ccca7yghfhxlppl4js754h4cm

Fine-Grained Named Entity Recognition Using a Multi-Stacked Feature Fusion and Dual-Stacked Output in Korean

Hongjin Kim, Harksoo Kim
2021 Applied Sciences  
The proposed model is based on multi-stacked long short-term memories (LSTMs) with a multi-stacked feature fusion layer for acquiring multilevel embeddings and a dual-stacked output layer for predicting  ...  Named entity recognition (NER) is a natural language processing task to identify spans that mention named entities and to annotate them with predefined named entity classes.  ...  automatically generates NE sequences by concatenating the characters with B tags and successive I tags.  ... 
doi:10.3390/app112210795 fatcat:lbsmshrbi5e5zknexg7njixyqi
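The last snippet above describes producing named-entity sequences by concatenating characters carrying a B tag with the characters of the successive I tags. As an illustrative decoding sketch (not the authors' implementation; the `B-`/`I-`/`O` tag format and the function name are assumptions), the merge step can be written as:

```python
def bio_to_spans(chars, tags):
    """Merge each character tagged B-<class> with the characters of
    its successive I-<class> tags into (entity, class) pairs."""
    spans, current = [], None
    for ch, tag in zip(chars, tags):
        if tag.startswith("B-"):
            if current:                     # close any open entity
                spans.append(current)
            current = (ch, tag[2:])         # start a new entity span
        elif tag.startswith("I-") and current:
            current = (current[0] + ch, current[1])  # extend the span
        else:                               # "O" tag ends the span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans

# Toy Korean example: the first two characters form a location entity.
chars = list("서울에서만나요")
tags = ["B-LOC", "I-LOC", "O", "O", "O", "O", "O"]
print(bio_to_spans(chars, tags))  # [('서울', 'LOC')]
```

This is the standard way character-level BIO predictions are turned back into entity strings after the dual-stacked output layer assigns a tag per character.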
Showing results 1 — 15 out of 17,523 results