
An efficient sentiment analysis using topic model based optimized recurrent neural network

Nikhlesh Pathik, Pragya Shukla
2021 International Journal on Smart Sensing and Intelligent Systems  
In recent years, topic modeling and deep neural network-based methods have attracted much attention in sentiment analysis of online reviews.  ...  Latent Dirichlet allocation is applied for aspect extraction, and a two-layer bi-directional long short-term memory (LSTM) network for sentiment classification.  ...  An attention-based LSTM model is used for the word sequence in a given document, with a latent topic modeling layer.  ... 
doi:10.21307/ijssis-2021-011 fatcat:qdj5qfb7crbbfnvezvejkkgm4q
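
As a rough illustration of the two-layer bidirectional LSTM classifier this entry describes, here is a minimal PyTorch sketch; the vocabulary size, dimensions, and class count are illustrative assumptions, and the LDA-based aspect extraction step is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    """Two-layer bidirectional LSTM sentiment classifier (illustrative sizes)."""
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):             # (batch, seq_len)
        x = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        out, _ = self.lstm(x)                 # (batch, seq_len, 2*hidden_dim)
        return self.fc(out[:, -1, :])         # classify from the last time step

logits = BiLSTMSentiment()(torch.randint(0, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])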

A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification [article]

Ruizhe Li, Chenghua Lin, Matthew Collinson, Xiao Li, Guanyi Chen
2019 arXiv   pre-print
In this paper, we propose a dual-attention hierarchical recurrent neural network for DA classification.  ...  With a novel dual task-specific attention mechanism, our model is able to capture, for each utterance, information about both DAs and topics, as well as about the interactions between them.  ...  DA classification; Bi-LSTM-CRF: a hierarchical Bi-LSTM with a CRF to classify DAs (Kumar et al., 2018); CRF-ASN: an attentive structured network with a CRF for DA classification (Chen et al., 2018)  ... 
arXiv:1810.09154v3 fatcat:6c7fdlmocfa7bclvibly7mrbom
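
A minimal sketch of the dual task-specific attention idea, assuming two additive attention heads pooling a shared BiGRU encoding, one feeding a DA head and one feeding the auxiliary topic head; all sizes and head shapes are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Two task-specific attentions over shared BiGRU states (illustrative)."""
    def __init__(self, embed_dim=100, hidden=64, num_das=10, num_topics=5):
        super().__init__()
        self.encoder = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.da_attn = nn.Linear(2 * hidden, 1)      # DA-specific scorer
        self.topic_attn = nn.Linear(2 * hidden, 1)   # topic-specific scorer
        self.da_head = nn.Linear(2 * hidden, num_das)
        self.topic_head = nn.Linear(2 * hidden, num_topics)

    def pool(self, states, attn):
        weights = torch.softmax(attn(states), dim=1)  # (batch, seq, 1)
        return (weights * states).sum(dim=1)          # (batch, 2*hidden)

    def forward(self, utterance_embeds):              # (batch, seq, embed_dim)
        states, _ = self.encoder(utterance_embeds)
        return (self.da_head(self.pool(states, self.da_attn)),
                self.topic_head(self.pool(states, self.topic_attn)))
```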

A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification

Ruizhe Li, Chenghua Lin, Matthew Collinson, Xiao Li, Guanyi Chen
2019 Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)  
With a novel dual task-specific attention mechanism, our model is able to capture, for each utterance, information about both DAs and topics, as well as about the interactions between them.  ...  Experimental results show that by modelling topic as an auxiliary task, our model can significantly improve DA classification, yielding performance better than or comparable to the state-of-the-art method on  ...  DA classification; Bi-LSTM-CRF: a hierarchical Bi-LSTM with a CRF to classify DAs (Kumar et al., 2018); CRF-ASN: an attentive structured network with a CRF for DA classification (Chen et al., 2018)  ... 
doi:10.18653/v1/k19-1036 dblp:conf/conll/LiLCLC19 fatcat:5dk252qytjawbexuvtkutrsrdu

Tourism Review Sentiment Classification Using a Bidirectional Recurrent Neural Network with an Attention Mechanism and Topic-Enriched Word Vectors

Qin Li, Shaobo Li, Jie Hu, Sen Zhang, Jianjun Hu
2018 Sustainability  
In this work, we propose a bidirectional gated recurrent unit neural network model (BiGRULA) for sentiment analysis by combining a topic model (lda2vec) with an attention mechanism.  ...  Furthermore, we applied our model to hotel review data analysis, which allows us to extract more coherent topics from these reviews and achieve good performance in sentiment classification.  ...  In our model, BiGRU, a bi-directional recurrent neural network, is used to map a sequence of word vectors of the document to sentiment categories.  ... 
doi:10.3390/su10093313 fatcat:ipqqqpqv6vb4hkdqf44ochhtri
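
A minimal sketch of the BiGRU-with-attention classifier over topic-enriched word vectors that this entry describes; the topic vector stands in for lda2vec output, and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiGRUAttention(nn.Module):
    """BiGRU + attention over word vectors enriched with a document
    topic vector (a stand-in for lda2vec; sizes are assumptions)."""
    def __init__(self, word_dim=100, topic_dim=20, hidden=64, num_classes=2):
        super().__init__()
        self.gru = nn.GRU(word_dim + topic_dim, hidden,
                          bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, word_vecs, topic_vec):
        # word_vecs: (batch, seq, word_dim); topic_vec: (batch, topic_dim)
        topics = topic_vec.unsqueeze(1).expand(-1, word_vecs.size(1), -1)
        states, _ = self.gru(torch.cat([word_vecs, topics], dim=-1))
        weights = torch.softmax(self.attn(states), dim=1)
        return self.fc((weights * states).sum(dim=1))
```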

Temporal Attention-Gated Model for Robust Sequence Classification [article]

Wenjie Pei, Tadas Baltrušaitis, David M.J. Tax, Louis-Philippe Morency
2017 arXiv   pre-print
In this paper, we present the Temporal Attention-Gated Model (TAGM), which integrates ideas from attention models and gated recurrent networks to better deal with noisy or unsegmented sequences.  ...  Specifically, we extend the concept of the attention model to measure the relevance of each observation (time step) of a sequence.  ...  We also investigate the bi-directional variant of our TAGM model (referred to as Bi-TAGM), which employs the bi-directional recurrent configuration in the recurrent attention-gated units.  ... 
arXiv:1612.00385v2 fatcat:wyz7xw2p5naxpivviru22ysdl4
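
A minimal sketch of the attention-gated recurrence this entry describes: a per-time-step salience score gates how much each observation updates the hidden state. The attention scorer here is a BiGRU over the raw inputs, and all sizes are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TAGMCell(nn.Module):
    """Attention-gated recurrence: h_t = (1 - a_t) * h_{t-1} + a_t * cand_t."""
    def __init__(self, input_dim=32, hidden=64, attn_hidden=16):
        super().__init__()
        self.attn_rnn = nn.GRU(input_dim, attn_hidden,
                               bidirectional=True, batch_first=True)
        self.attn_out = nn.Linear(2 * attn_hidden, 1)
        self.in_proj = nn.Linear(input_dim, hidden)
        self.rec_proj = nn.Linear(hidden, hidden, bias=False)

    def forward(self, x):                                # (batch, seq, input_dim)
        attn_states, _ = self.attn_rnn(x)
        a = torch.sigmoid(self.attn_out(attn_states))    # (batch, seq, 1)
        h = x.new_zeros(x.size(0), self.rec_proj.in_features)
        for t in range(x.size(1)):
            cand = torch.tanh(self.in_proj(x[:, t]) + self.rec_proj(h))
            h = (1 - a[:, t]) * h + a[:, t] * cand       # gated update
        return h                                         # final summary state
```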

Combine Convolution with Recurrent Networks for Text Classification [article]

Shengfei Lyu, Jiaqi Liu
2020 arXiv   pre-print
Meanwhile, we use a bi-directional RNN to process each word and employ a neural tensor layer that fuses the forward and backward hidden states to get word representations.  ...  Convolutional neural networks (CNN) and recurrent neural networks (RNN) are two popular architectures used in text classification.  ...  We compare CNN, Bi-GRU and our model with MV-RNN on the MR dataset, and experimental results show that the convolution-based and recurrent-based models have better performance.  ... 
arXiv:2006.15795v1 fatcat:xx3m64jhmfbt7lz262v7gwgsyq
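
A minimal sketch of the neural-tensor fusion of forward and backward RNN states this entry describes, using nn.Bilinear for the tensor term plus a linear term; a CNN could then consume the fused per-word representations. Dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TensorFusedWordRep(nn.Module):
    """Fuse forward/backward GRU states per word with a neural tensor layer."""
    def __init__(self, embed_dim=100, hidden=64, fused=64):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.bilinear = nn.Bilinear(hidden, hidden, fused)   # tensor term
        self.linear = nn.Linear(2 * hidden, fused, bias=False)

    def forward(self, embeds):                     # (batch, seq, embed_dim)
        states, _ = self.rnn(embeds)               # (batch, seq, 2*hidden)
        fwd, bwd = states.chunk(2, dim=-1)         # split direction halves
        fused = torch.tanh(self.bilinear(fwd.contiguous(), bwd.contiguous())
                           + self.linear(states))
        return fused                               # (batch, seq, fused)
```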

Temporal Attention-Gated Model for Robust Sequence Classification

Wenjie Pei, Tadas Baltrusaitis, David M. J. Tax, Louis-Philippe Morency
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
In this paper, we present the Temporal Attention-Gated Model (TAGM), which integrates ideas from attention models and gated recurrent networks to better deal with noisy or unsegmented sequences.  ...  Specifically, we extend the concept of the attention model to measure the relevance of each observation (time step) of a sequence.  ...  We also investigate the bi-directional variant of our TAGM model (referred to as Bi-TAGM), which employs the bi-directional recurrent configuration in the recurrent attention-gated units.  ... 
doi:10.1109/cvpr.2017.94 dblp:conf/cvpr/PeiBTM17 fatcat:brdpjt6mivczhmrylcmkiwej5u

Neural Topic Model with Attention for Supervised Learning

Xinyi Wang, Yi Yang
2020 International Conference on Artificial Intelligence and Statistics  
This paper presents the Topic Attention Model (TAM), a supervised neural topic model that integrates with a recurrent neural network.  ...  We design a novel way to utilize the document-specific topic proportions and global topic vectors learned from the neural topic model in the attention mechanism.  ...  We adopt a Bi-directional Gated Recurrent Unit (GRU) to encode the embedding sequence; see (Cho et al., 2014) for a detailed mathematical description of the GRU gating mechanism.  ... 
dblp:conf/aistats/WangY20 fatcat:a4vkfc6amzfzjha4aixe5m4o6y
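
A minimal sketch of topic-guided attention in the spirit of this entry: document topic proportions mix global topic vectors into a query that scores BiGRU states. The neural topic model that produces the proportions is omitted, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TopicAttention(nn.Module):
    """Attention over BiGRU states queried by a topic-mixture vector."""
    def __init__(self, embed_dim=100, hidden=64, num_topics=20):
        super().__init__()
        self.gru = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.topic_vecs = nn.Parameter(torch.randn(num_topics, 2 * hidden))

    def forward(self, embeds, theta):          # theta: (batch, num_topics)
        states, _ = self.gru(embeds)           # (batch, seq, 2*hidden)
        query = theta @ self.topic_vecs        # mix global topic vectors
        scores = torch.bmm(states, query.unsqueeze(-1))   # (batch, seq, 1)
        weights = torch.softmax(scores, dim=1)
        return (weights * states).sum(dim=1)   # topic-aware document vector
```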

Topical Stance Detection for Twitter: A Two-Phase LSTM Model Using Attention [article]

Kuntal Dey, Ritvik Shrivastava, Saroj Kaushik
2018 arXiv   pre-print
Using the concept of attention, we develop a two-phase solution. In the first phase, we classify subjectivity: whether a given tweet is neutral or subjective with respect to the given topic.  ...  We propose a Long Short-Term Memory (LSTM) based deep neural network for each phase, and embed attention at each of the phases.  ...  Our model is a two-phase one. At each phase, there are two components: a bidirectional LSTM and an attention mechanism. The bi-directional LSTM is used for feature encoding.  ... 
arXiv:1801.03032v1 fatcat:vwudodn6ybglhldddcix5z6jc4
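
A minimal sketch of one phase of the two-phase pipeline this entry describes: a BiLSTM encoder followed by attention pooling and a classifier head, instantiated once per phase. Label sets and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    """One phase: BiLSTM feature encoding + attention pooling + classifier."""
    def __init__(self, embed_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, embeds):                        # (batch, seq, embed_dim)
        states, _ = self.lstm(embeds)
        weights = torch.softmax(self.attn(states), dim=1)
        return self.fc((weights * states).sum(dim=1))

subjectivity = AttentiveLSTM(num_classes=2)   # phase 1: neutral vs. subjective
stance = AttentiveLSTM(num_classes=2)         # phase 2: stance on the topic
```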

Bi-level Attention Model with Topic Information for Classification

Hongtao Liu, Qimin Qian
2021 IEEE Access  
[27] used an LDA model to form a document-based distribution over the topic of each word, and applied it to a recurrent neural network language model. M. Syamala et al.  ...  (Bi-LSTM) network modeling is based on documents represented by sentences.  ... 
doi:10.1109/access.2021.3058016 fatcat:zju7lyb7hvhahcvoeinux6gshm
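
A minimal sketch of the bi-level (word-level then sentence-level) attention suggested by this entry's title, over a document modelled as sentences of words; the topic-information terms are omitted and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class BiLevelAttention(nn.Module):
    """Word-level attention builds sentence vectors; sentence-level
    attention builds the document vector for classification."""
    def __init__(self, embed_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.word_rnn = nn.GRU(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.sent_rnn = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.word_attn = nn.Linear(2 * hidden, 1)
        self.sent_attn = nn.Linear(2 * hidden, 1)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def attend(self, states, attn):
        w = torch.softmax(attn(states), dim=1)
        return (w * states).sum(dim=1)

    def forward(self, docs):                  # (batch, sents, words, embed_dim)
        b, s, w, d = docs.shape
        word_states, _ = self.word_rnn(docs.view(b * s, w, d))
        sent_vecs = self.attend(word_states, self.word_attn).view(b, s, -1)
        sent_states, _ = self.sent_rnn(sent_vecs)
        return self.fc(self.attend(sent_states, self.sent_attn))
```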

A Hybrid Bidirectional Recurrent Convolutional Neural Network Attention-Based Model for Text Classification

Jin Zheng, Limin Zheng
2019 IEEE Access  
In this paper, we propose a hybrid bidirectional recurrent convolutional neural network attention-based model, named BRCAN, to address this issue.  ...  In our model, we apply word2vec to generate word vectors automatically and a bidirectional recurrent structure to capture the contextual information and long-term dependencies of sentences.  ...  In conclusion, we propose a hybrid bidirectional recurrent convolutional neural network attention-based model (BRCAN), which combines the Bi-LSTM and CNN effectively with the help of word2vec  ... 
doi:10.1109/access.2019.2932619 fatcat:5il6p3bcifbv5hzizvzzomxsp4
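
A minimal sketch in the spirit of the BRCAN hybrid this entry describes: a BiLSTM captures context, a 1-D convolution extracts local features from its states, and attention pools them for classification. Filter sizes, dimensions, and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class BRCANSketch(nn.Module):
    """Hybrid BiLSTM -> CNN -> attention classifier (illustrative sizes)."""
    def __init__(self, embed_dim=100, hidden=64, channels=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, channels, kernel_size=3, padding=1)
        self.attn = nn.Linear(channels, 1)
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, embeds):                        # (batch, seq, embed_dim)
        states, _ = self.lstm(embeds)                 # contextual states
        feats = torch.relu(self.conv(states.transpose(1, 2))).transpose(1, 2)
        weights = torch.softmax(self.attn(feats), dim=1)
        return self.fc((weights * feats).sum(dim=1))
```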

Recurrent Neural Networks and its variants in Remaining Useful Life prediction

Youdao Wang, Sri Addepalli, Yifan Zhao
2020 IFAC-PapersOnLine  
Data-driven techniques, especially artificial intelligence (AI) based deep learning (DL) techniques, have attracted more and more attention in the manufacturing sector because of the rapid growth of the  ...  A Bi-directional LSTM structure (Cui, Ke and Wang, 2018).  ...  The Gated Recurrent Unit (GRU) is a newer generation of RNN, and it looks very similar to the LSTM, as demonstrated in Figure 4.  ... 
doi:10.1016/j.ifacol.2020.11.022 fatcat:43teizfslbcxjcet5tigetrxym
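
A minimal sketch of the kind of recurrent Remaining Useful Life (RUL) model surveyed here: a GRU maps a sliding window of sensor readings to a single RUL estimate. The window length, sensor count, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GRURegressor(nn.Module):
    """GRU over a window of sensor readings, regressing one RUL value."""
    def __init__(self, num_sensors=14, hidden=64):
        super().__init__()
        self.gru = nn.GRU(num_sensors, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, window):            # (batch, time_steps, num_sensors)
        _, h_n = self.gru(window)         # h_n: (1, batch, hidden)
        return self.fc(h_n.squeeze(0))    # (batch, 1) predicted RUL

rul = GRURegressor()(torch.randn(8, 30, 14))
print(rul.shape)  # torch.Size([8, 1])
```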

Sentiment Analysis Using Gated Recurrent Neural Networks

Sharat Sachin, Abha Tripathi, Navya Mahajan, Shivani Aggarwal, Preeti Nagrath
2020 SN Computer Science  
We have implemented the baseline LSTM, GRU, Bi-LSTM and Bi-GRU models on an Amazon review dataset.  ...  Traditional approaches to sentiment analysis use the tally or frequency of words in a text, which are allotted sentiment values by some expert.  ...  [34] worked on a bi-gated RNN model with an integrated attention mechanism to create a new model named ABAE-Bi-GRU.  ... 
doi:10.1007/s42979-020-0076-y fatcat:xnfzd4xpzzbx3lyg3bppj3jnrm
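
A minimal sketch of how the four baselines this entry compares (LSTM, GRU, Bi-LSTM, Bi-GRU) differ only in cell type and directionality; classification would read out from the final hidden state. Sizes are illustrative assumptions.

```python
import torch.nn as nn

def make_baseline(cell="gru", bidirectional=False,
                  vocab=20000, embed_dim=128, hidden=64, num_classes=2):
    """Build one gated-RNN sentiment baseline (illustrative sizes)."""
    rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
    return nn.ModuleDict({
        "embed": nn.Embedding(vocab, embed_dim),
        "rnn": rnn_cls(embed_dim, hidden, bidirectional=bidirectional,
                       batch_first=True),
        "head": nn.Linear(hidden * (2 if bidirectional else 1), num_classes),
    })

baselines = {name: make_baseline(cell, bi)
             for name, (cell, bi) in {"LSTM": ("lstm", False),
                                      "GRU": ("gru", False),
                                      "Bi-LSTM": ("lstm", True),
                                      "Bi-GRU": ("gru", True)}.items()}
```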

Multimodal Recurrent Model with Attention for Automated Radiology Report Generation [chapter]

Yuan Xue, Tao Xu, L. Rodney Long, Zhiyun Xue, Sameer Antani, George R. Thoma, Xiaolei Huang
2018 Lecture Notes in Computer Science  
Experiments on chest X-rays from the Open-i image collection show that our proposed recurrent attention model achieves significant improvements over baseline models according to multiple evaluation metrics.  ...  The proposed model incorporates Convolutional Neural Networks (CNNs) with Long Short-Term Memory (LSTM) in a recurrent way.  ...  The first one is a Bi-directional Long Short-Term Memory (Bi-LSTM) [6], which can encode context information better than the conventional one-directional LSTM.  ... 
doi:10.1007/978-3-030-00928-1_52 fatcat:hxsh7dto6zfpfcipjwxa4bvip4
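
A heavily simplified sketch of the CNN-with-LSTM pattern this entry describes: image features from a CNN initialise an LSTM that decodes report tokens. The CNN backbone, the multimodal recurrent coupling, and the Bi-LSTM context encoder are all omitted; every size is an assumption.

```python
import torch
import torch.nn as nn

class ReportDecoder(nn.Module):
    """LSTM report decoder initialised from CNN image features."""
    def __init__(self, feat_dim=512, vocab=5000, embed_dim=256, hidden=256):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden)   # image feats -> initial state
        self.embed = nn.Embedding(vocab, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, image_feats, token_ids):
        # image_feats: (batch, feat_dim); token_ids: (batch, seq)
        h0 = torch.tanh(self.init_h(image_feats)).unsqueeze(0)
        c0 = torch.zeros_like(h0)
        states, _ = self.lstm(self.embed(token_ids), (h0, c0))
        return self.out(states)          # per-step vocabulary logits
```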

Character-Based Text Classification using Top Down Semantic Model for Sentence Representation [article]

Zhenzhou Wu and Xin Zheng and Daniel Dahlmeier
2017 arXiv   pre-print
Deep learning tends to emphasize sentence-level semantics when learning a representation with models like recurrent neural networks or recursive neural networks; however, from the success of the TF-IDF representation  ...  the words with attention weights and the sentence-level semantics with a BiLSTM, and use it for text classification.  ...  We used a Bi-directional LSTM (BiLSTM) (Graves et al., 2013a) as the recurrent unit.  ... 
arXiv:1705.10586v1 fatcat:6w4rnp3oyvcfvetbljyjjuizzu
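
A minimal sketch of the composition this entry describes: words receive attention weights, and a BiLSTM over the weighted word vectors yields the sentence representation for classification. The character-level word encoder is omitted, and sizes are assumptions.

```python
import torch
import torch.nn as nn

class TopDownSentenceModel(nn.Module):
    """Attention-weighted words fed to a BiLSTM sentence encoder."""
    def __init__(self, word_dim=100, hidden=64, num_classes=2):
        super().__init__()
        self.word_attn = nn.Linear(word_dim, 1)
        self.lstm = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, word_vecs):                       # (batch, seq, word_dim)
        weights = torch.softmax(self.word_attn(word_vecs), dim=1)
        states, _ = self.lstm(weights * word_vecs)      # weighted words in
        return self.fc(states[:, -1, :])                # last-step sentence vector
```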
Showing results 1 — 15 of 21,811