Towards Autoencoding Variational Inference for Aspect-based Opinion Summary
[article]
2019
arXiv
pre-print
Ultimately, we present the Autoencoding Variational Inference for Joint Sentiment/Topic (AVIJST) model. ...
Firstly, we introduce the Autoencoding Variational Inference for Aspect Discovery (AVIAD) model, which extends the previous work of Autoencoding Variational Inference for Topic Models (AVITM) to embed ...
Furthermore, the model II semi-supervised variational autoencoder (SSVAEII-MLP and SSVAEII-CNN), which was first proposed for the semi-supervised problem in Kingma, Rezende, Mohamed, and Welling (2014), is ...
arXiv:1902.02507v2
fatcat:arwqphelbnfzpba26vpcc4dfcq
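For orientation, the "model II" mentioned in this entry's snippet is commonly identified with the M2 semi-supervised VAE of Kingma, Rezende, Mohamed, and Welling (2014). A standard, simplified form of that objective is sketched below; this is a generic sketch, not the exact AVIJST formulation, with y the class label and z the latent code:

-\mathcal{L}(x, y) = \mathbb{E}_{q_\phi(z \mid x, y)}\left[\log p_\theta(x \mid y, z) + \log p(y) + \log p(z) - \log q_\phi(z \mid x, y)\right]
-\mathcal{U}(x) = \sum_{y} q_\phi(y \mid x)\,\bigl(-\mathcal{L}(x, y)\bigr) + \mathcal{H}\bigl(q_\phi(y \mid x)\bigr)
\mathcal{J}^{\alpha} = \sum_{(x, y) \in \mathcal{D}_{\ell}} \mathcal{L}(x, y) + \sum_{x \in \mathcal{D}_{u}} \mathcal{U}(x) + \alpha \, \mathbb{E}_{(x, y) \in \mathcal{D}_{\ell}}\left[-\log q_\phi(y \mid x)\right]

Labeled pairs contribute the first sum, unlabeled examples are handled by marginalizing the classifier q_\phi(y | x) over all labels, and the final term gives the classifier an explicit supervised signal weighted by \alpha.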
Improved Variational Autoencoders for Text Modeling using Dilated Convolutions
[article]
2017
arXiv
pre-print
Further, we conduct an in-depth investigation of the use of VAE (with our new decoding architecture) for semi-supervised and unsupervised labeling tasks, demonstrating gains over several strong baselines ...
We show that with the right decoder, VAE can outperform LSTM language models. ...
... & Le, 2015)) on both text categorization and sentiment analysis. ...
arXiv:1702.08139v2
fatcat:liugemfo5jeblczitt5xpy2fxm
Tibetan Sentiment Classification Method Based on Semi-Supervised Recursive Autoencoders
2019
Computers Materials & Continua
We apply the semi-supervised recursive autoencoder (RAE) model to the sentiment classification task for Tibetan short text and obtain better classification results. ...
The input to the semi-supervised RAE model is the word vector. ...
Based on the above related work, this paper applies the semi-supervised recursive autoencoder (RAE) model to the sentiment classification task for Tibetan short text. ...
doi:10.32604/cmc.2019.05157
fatcat:7jzusdxtsncsheikdjqgcxg4cy
Deep Variational Semi-Supervised Novelty Detection
[article]
2021
arXiv
pre-print
In semi-supervised AD (SSAD), the data also includes a small sample of labeled anomalies. In this work, we propose two variational methods for training VAEs for SSAD. ...
A recent and promising approach to AD relies on deep generative models, such as variational autoencoders (VAEs), for unsupervised learning of the normal data distribution. ...
Semi-Supervised Deep Generative Models (SS-DGM) [24] proposed a deep variational generative approach to semi-supervised learning. ...
arXiv:1911.04971v3
fatcat:m2ymdjeuuvaodjnvudkr3bveqy
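The unsupervised baseline described in this snippet, training a VAE on normal data and scoring test points by how poorly the model explains them, can be sketched in a few lines. The following is a minimal illustration only (plain PyTorch, illustrative dimensions and threshold; it is not the paper's proposed SSAD method and does not make use of the labeled anomalies):

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(model, x):
    # per-example negative ELBO: reconstruction error plus KL to the prior
    x_hat, mu, logvar = model(x)
    rec = F.binary_cross_entropy_with_logits(x_hat, x, reduction="none").sum(-1)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
    return rec + kl

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_normal = torch.rand(128, 784)          # stand-in batch of normal data
for _ in range(10):                      # toy training loop on normal data only
    opt.zero_grad()
    neg_elbo(model, x_normal).mean().backward()
    opt.step()

x_test = torch.rand(16, 784)
scores = neg_elbo(model, x_test)         # higher score = less well explained = more anomalous
is_anomaly = scores > scores.mean() + 2 * scores.std()   # illustrative threshold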
Sentiment Analysis of Social Media via Multimodal Feature Fusion
2020
Symmetry
Most of the information posted by users on social media has obvious sentimental aspects, and multimodal sentiment analysis has become an important research field. ...
Previous studies on multimodal sentiment analysis have primarily focused on extracting text and image features separately and then combining them for sentiment classification. ...
... denoising autoencoder, and classified textual and image-fusion features in an unsupervised and semi-supervised manner. ...
doi:10.3390/sym12122010
fatcat:r4xhs7pblneyvh67rw2ydfsddi
Deep Learning Models in Software Requirements Engineering
[article]
2021
arXiv
pre-print
In this article we have accomplished the first step of the research on this topic: we have applied the vanilla sentence autoencoder to the sentence generation task and evaluated its performance. ...
The Semi-supervised Sequential Variational Autoencoder (SS-VAE), suggested by Xu et al. in [32], targets the text classification task. ...
Deep learning can be supervised, semi-supervised, or unsupervised. In supervised learning the data in the training set has labels, such as class name or target numeric value. ...
arXiv:2105.07771v1
fatcat:fxuknpdgmjahlcogbg3t3wy3f4
Table of Contents [EDICS]
2020
IEEE/ACM Transactions on Audio Speech and Language Processing
Semi-Supervised Neural Chord Estimation Based on a Variational Autoencoder With Latent Chord Labels and Features ...
Speech Analysis: Semi-Supervised Speech Emotion Recognition With Ladder Networks, S. Parthasarathy and C. ...
doi:10.1109/taslp.2020.3046150
fatcat:easrxuwl6zdppejsrf4bskxfw4
A multimodal feature learning approach for sentiment analysis of social network multimedia
2015
Multimedia tools and applications
In this paper we investigate the use of a multimodal feature learning approach, using neural network based models such as Skip-gram and Denoising Autoencoders, to address sentiment analysis of micro-blogging ...
... and have shown very good performance when dealing with syntactic and semantic word similarities; ii) unsupervised learning, with neural networks, of robust visual features that are recoverable from ...
Note that polarity supervision is limited and possibly weak, thus a robust semi-supervised setting is preferred: on the one hand, a model of sentiment polarity can use the limited supervision available ...
doi:10.1007/s11042-015-2646-x
fatcat:nbaw3j4opnagfiqmru5lpyxuda
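The denoising-autoencoder component mentioned in this snippet follows the standard recipe: corrupt the input, reconstruct the clean version, and keep the hidden code as the learned feature. A minimal sketch under assumed dimensions (plain PyTorch; not the paper's exact architecture or multimodal fusion scheme):

import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim=1000, code_dim=128, noise_p=0.3):
        super().__init__()
        self.noise_p = noise_p
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        mask = (torch.rand_like(x) > self.noise_p).float()   # masking noise on the input
        code = self.encoder(x * mask)
        return self.decoder(code), code

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.rand(64, 1000)                 # stand-in for concatenated text/visual feature vectors
for _ in range(5):                       # toy training loop: reconstruct the uncorrupted input
    opt.zero_grad()
    x_hat, _ = model(x)
    loss_fn(x_hat, x).backward()
    opt.step()

features = model.encoder(x)              # clean-input encoding used as the downstream sentiment feature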
Semi-supervised Structured Prediction with Neural CRF Autoencoder
2017
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
In this paper we propose an end-to-end neural CRF autoencoder (NCRF-AE) model for semi-supervised learning of sequential structured prediction problems. ...
Our experimental results on the Part-of-Speech (POS) tagging task across eight different languages show that the NCRF-AE model can outperform competitive systems in both supervised and semi-supervised scenarios. ...
Semi-supervised Learning: In the semi-supervised setting we compared our models with other semi-supervised structured prediction models. ...
doi:10.18653/v1/d17-1179
dblp:conf/emnlp/ZhangJPTG17
fatcat:74zzfoqkyjginic6dt732onxee
Cross Lingual Sentiment Analysis using Modified BRAE
2015
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
In this paper, we use the Recursive Autoencoder architecture to develop a Cross Lingual Sentiment Analysis (CLSA) tool using sentence aligned corpora between a pair of resource rich (English) and resource ...
It is shown that our approach significantly outperforms state-of-the-art systems for Sentiment Analysis, especially when labeled data is scarce. ...
In Fig. 3, we show the variation in classifier accuracy with the amount of sentiment-labeled training data used. ...
doi:10.18653/v1/d15-1016
dblp:conf/emnlp/JainB15
fatcat:twu6u4svpffzpgfubyt5jsieiy
Comparative Study on Generative Adversarial Networks
[article]
2018
arXiv
pre-print
We study the original model proposed by Goodfellow et al. as well as modifications over the original model and provide a comparative analysis of these models. ...
The authors evaluated the performance of adversarial autoencoders on the MNIST and Toronto Face datasets using log-likelihood analysis in supervised, semi-supervised and unsupervised settings. ...
The authors also show how adversarial autoencoders can be used for dimensionality reduction. ...
arXiv:1801.04271v1
fatcat:g2quw4bnfzdgpjjvt3kgiwyx44
Neural Structural Correspondence Learning for Domain Adaptation
[article]
2017
arXiv
pre-print
On the task of cross-domain product sentiment classification (Blitzer et al., 2007), consisting of 12 domain pairs, our model outperforms both the SCL and the marginalized stacked denoising autoencoder ...
..., 2006)) and autoencoder neural networks. ...
There is a recent interest in models based on variational autoencoders (Kingma and Welling, 2014; Rezende et al., 2014), for example the variational fair autoencoder model (Louizos et al., 2016), for ...
arXiv:1610.01588v3
fatcat:ydxjqkd4pnbtzmq5yndiunstiq
Active Learning via Membership Query Synthesis for Semi-Supervised Sentence Classification
2019
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
We present the first successful attempt to use Membership Query Synthesis for generating AL queries for natural language processing, using Variational Autoencoders for query generation. ...
Training Variational Autoencoder ... (Srivastava et al., 2014). ...
Like other autoencoders, VAEs learn a mapping q_θ(z|x) from a high-dimensional input x to a low-dimensional latent variable z. ...
doi:10.18653/v1/k19-1044
dblp:conf/conll/SchumannR19
fatcat:hte7yubcijaktbfebga24satqu
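The snippet above describes the two directions a VAE provides: an encoder q_θ(z|x) that compresses a high-dimensional input into a low-dimensional latent code, and a decoder that turns latent points back into data, which is what makes query synthesis possible. A loose, assumption-heavy sketch of the synthesis direction follows (illustrative sizes and an untrained decoder; not the exact procedure of this paper):

import torch
import torch.nn as nn

latent_dim, vocab_size, max_len = 16, 5000, 20

decoder = nn.Sequential(                 # maps a latent code to per-token vocabulary logits
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, max_len * vocab_size),
)

def synthesize_queries(n_queries=4):
    # sample points in latent space and decode them into token-id sequences
    # that would be handed to an annotator as active-learning queries
    z = torch.randn(n_queries, latent_dim)
    logits = decoder(z).view(n_queries, max_len, vocab_size)
    return logits.argmax(dim=-1)

queries = synthesize_queries()
print(queries.shape)                     # torch.Size([4, 20])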
Neural Structural Correspondence Learning for Domain Adaptation
2017
Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)
We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines. 1 ...
..., 2006)) and autoencoder neural networks (NNs). ...
There is a recent interest in models based on variational autoencoders (Kingma and Welling, 2014; Rezende et al., 2014), for example the variational fair autoencoder model (Louizos et al., 2016), for ...
doi:10.18653/v1/k17-1040
dblp:conf/conll/ZiserR17
fatcat:hvkpr7vulvhibcjbshc5toeqny
Deep Learning for Sentiment Analysis : A Survey
[article]
2018
arXiv
pre-print
Along with the success of deep learning in many other application domains, deep learning is also popularly used in sentiment analysis in recent years. ...
This paper first gives an overview of deep learning and then provides a comprehensive survey of its current applications in sentiment analysis. ...
... Ltd with a research gift. ...
arXiv:1801.07883v2
fatcat:nplicfgaozb6fbfx4eyts4zt7e
Showing results 1 — 15 out of 594 results