
Neural Topic Modeling with Bidirectional Adversarial Training [article]

Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye, Haiyang Xu
2020 arXiv   pre-print
To address these limitations, we propose a neural topic modeling approach, called the Bidirectional Adversarial Topic (BAT) model, which represents the first attempt at applying bidirectional adversarial training  ...  Furthermore, to incorporate word relatedness information, the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT.  ...  To address these limitations, we model topics with a Dirichlet prior and propose a novel Bidirectional Adversarial Topic model (BAT) based on bidirectional adversarial training.  ... 
arXiv:2004.12331v1 fatcat:xsbtuzkfk5fvvmfm57ddt4izg4
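The BAT setup pairs an encoder (document to topic proportions), a generator (topic proportions to word distribution), and a discriminator over joint document-topic pairs. Below is a minimal PyTorch sketch of that bidirectional adversarial training loop; the layer sizes, Dirichlet concentration, learning rates, and loop structure are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of bidirectional adversarial training for topic modeling,
# in the spirit of BAT (Wang et al., 2020). All sizes are assumptions.
import torch
import torch.nn as nn

V, K = 2000, 50  # vocabulary size, number of topics (assumed)

encoder = nn.Sequential(nn.Linear(V, 256), nn.LeakyReLU(0.2),
                        nn.Linear(256, K), nn.Softmax(dim=-1))
generator = nn.Sequential(nn.Linear(K, 256), nn.LeakyReLU(0.2),
                          nn.Linear(256, V), nn.Softmax(dim=-1))
# The discriminator scores joint (word-distribution, topic-distribution) pairs.
discriminator = nn.Sequential(nn.Linear(V + K, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_ge = torch.optim.Adam(list(generator.parameters()) +
                          list(encoder.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
prior = torch.distributions.Dirichlet(torch.full((K,), 0.1))  # Dirichlet prior

def train_step(docs):  # docs: (B, V) normalized bag-of-words
    b = docs.size(0)
    theta = prior.sample((b,))                       # topic proportions from prior
    real_pair = torch.cat([docs, encoder(docs)], dim=-1)
    fake_pair = torch.cat([generator(theta), theta], dim=-1)
    # 1) discriminator learns to tell real pairs from generated pairs
    loss_d = bce(discriminator(real_pair.detach()), torch.ones(b, 1)) + \
             bce(discriminator(fake_pair.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) encoder and generator are updated jointly to fool the discriminator
    loss_ge = bce(discriminator(fake_pair), torch.ones(b, 1)) + \
              bce(discriminator(real_pair), torch.zeros(b, 1))
    opt_ge.zero_grad(); loss_ge.backward(); opt_ge.step()
```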

Neural Topic Modeling with Bidirectional Adversarial Training

Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye, Haiyang Xu
2020 Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics   unpublished
To address these limitations, we propose a neural topic modeling approach, called the Bidirectional Adversarial Topic (BAT) model, which represents the first attempt at applying bidirectional adversarial training  ...  Furthermore, to incorporate word relatedness information, the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT.  ...  To address these limitations, we model topics with a Dirichlet prior and propose a novel Bidirectional Adversarial Topic model (BAT) based on bidirectional adversarial training.  ... 
doi:10.18653/v1/2020.acl-main.32 fatcat:6rghvu477rdjddu6srnyum37zu

Adversarial Training Methods for Semi-Supervised Text Classification [article]

Takeru Miyato, Andrew M. Dai, Ian Goodfellow
2021 arXiv   pre-print
We extend adversarial and virtual adversarial training to the text domain by applying perturbations to the word embeddings in a recurrent neural network rather than to the original input itself.  ...  Adversarial training provides a means of regularizing supervised learning algorithms, while virtual adversarial training extends supervised learning algorithms to the semi-supervised setting.  ...  Our bidirectional LSTM model has the same performance as a unidirectional LSTM with virtual adversarial training.  ... 
arXiv:1605.07725v4 fatcat:npgylhnfrvampcgf34oh575zka
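The perturbation described here is computed on the embedding vectors rather than on the discrete tokens. A minimal sketch of that idea follows, assuming a `model` that consumes embeddings directly and an illustrative `epsilon`; this is a stand-in for, not a reproduction of, the paper's exact recipe.

```python
# Minimal sketch of adversarial training on word embeddings: perturb the
# continuous embeddings in the worst-case (gradient) direction, then train
# on both the clean and the perturbed loss.
import torch
import torch.nn.functional as F

def adversarial_loss(model, embed, token_ids, labels, epsilon=2.0):
    emb = embed(token_ids)                               # (B, T, D)
    # estimate the worst-case direction from the gradient on a detached copy
    probe = emb.detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(probe), labels), probe)
    r_adv = epsilon * grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1) + 1e-12)
    clean_loss = F.cross_entropy(model(emb), labels)
    adv_loss = F.cross_entropy(model(emb + r_adv), labels)
    return clean_loss + adv_loss                         # optimize both terms
```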

Sentiment Transfer using Seq2Seq Adversarial Autoencoders [article]

Ayush Singh, Ritu Palod
2018 arXiv   pre-print
We propose a model combining seq2seq, autoencoders, and adversarial loss to achieve this goal.  ...  The key idea behind the proposed models is to learn separate content representations and style representations using adversarial networks.  ...  Stephen Intille for providing us with access to an NVIDIA GPU, which helped us reduce training and experimentation time.  ... 
arXiv:1804.04003v1 fatcat:4qccinw4ebealer7mlyc37qs2u

Neural Topic Modeling with Deep Mutual Information Estimation [article]

Kang Xu and Xiaoqiu Lu and Yuan-fang Li and Tongtong Wu and Guilin Qi and Ning Ye and Dong Wang and Zheng Zhou
2022 arXiv   pre-print
In this paper, we propose a neural topic model which incorporates deep mutual information estimation, i.e., Neural Topic Modeling with Deep Mutual Information Estimation (NTM-DMIE).  ...  The emerging neural topic models make topic modeling more readily adaptable and extensible in unsupervised text mining.  ...  Wang et al. [12] propose the Adversarial-neural Topic Model (ATM), which is based on adversarial training.  ... 
arXiv:2203.06298v1 fatcat:vls2jisyzjgbhhy2ws77yfuti4
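A common way to estimate and maximize mutual information between documents and their topic representations is a contrastive (InfoNCE-style) lower bound. The sketch below assumes paired `(doc_feats, topic_feats)` batches and an illustrative temperature; it stands in for, rather than reproduces, the paper's estimator.

```python
# Minimal sketch of an InfoNCE mutual-information bound between document
# features and topic representations; positives sit on the diagonal.
import torch
import torch.nn.functional as F

def infonce(doc_feats, topic_feats, temperature=0.1):
    # doc_feats, topic_feats: (B, D) paired representations of the same docs
    doc = F.normalize(doc_feats, dim=-1)
    top = F.normalize(topic_feats, dim=-1)
    logits = doc @ top.t() / temperature                 # (B, B) similarities
    labels = torch.arange(doc.size(0), device=doc.device)
    # maximizing MI corresponds to minimizing this contrastive loss
    return F.cross_entropy(logits, labels)
```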

Neural Topic Modeling with Cycle-Consistent Adversarial Training [article]

Xuemeng Hu, Rui Wang, Deyu Zhou, Yuxuan Xiong
2020 arXiv   pre-print
The recently proposed Adversarial-neural Topic Model models topics with an adversarially trained generator network and employs a Dirichlet prior to capture the semantic patterns in latent topics.  ...  To overcome such limitations, we propose Topic Modeling with Cycle-consistent Adversarial Training (ToMCAT) and its supervised version sToMCAT.  ...  topic model utilizing adversarial training. • BAT (Wang et al., 2020), a neural topic model utilizing bidirectional adversarial training.  ... 
arXiv:2009.13971v1 fatcat:humpi53tfbbprf3pq2pizek4nu
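Cycle consistency here means mapping documents to topics and back (and topics to documents and back) and penalizing the round-trip error. Below is a minimal sketch under assumed network shapes; the adversarial terms that ToMCAT combines with this loss are omitted.

```python
# Minimal sketch of the cycle-consistency loss in a ToMCAT-style model:
# doc -> topic -> doc and topic -> doc -> topic round trips.
import torch
import torch.nn as nn

V, K = 2000, 50  # vocabulary size, number of topics (assumed)
encoder = nn.Sequential(nn.Linear(V, 256), nn.ReLU(),
                        nn.Linear(256, K), nn.Softmax(dim=-1))
generator = nn.Sequential(nn.Linear(K, 256), nn.ReLU(),
                          nn.Linear(256, V), nn.Softmax(dim=-1))

def cycle_loss(docs, topics, lam=10.0):
    # docs: (B, V) normalized bag-of-words; topics: (B, K) Dirichlet samples
    doc_rec = generator(encoder(docs))        # forward cycle
    topic_rec = encoder(generator(topics))    # backward cycle
    return lam * ((doc_rec - docs).abs().mean() +
                  (topic_rec - topics).abs().mean())
```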

Adversarial Learning of Poisson Factorisation Model for Gauging Brand Sentiment in User Reviews [article]

Runcong Zhao and Lin Gui and Gabriele Pergola and Yulan He
2021 arXiv   pre-print
BTM is built on the Poisson factorisation model with the incorporation of adversarial learning. It has been evaluated on a dataset constructed from Amazon reviews.  ...  Different from existing models for sentiment-topic extraction, which assume topics are grouped under discrete sentiment categories such as 'positive', 'negative' and 'neutral', BTM is able to automatically  ...  further extended the ATM model with a Bidirectional Adversarial Topic (BAT) model, using bidirectional adversarial training to incorporate a Dirichlet distribution as prior and exploit the information  ... 
arXiv:2101.10150v1 fatcat:hyxyreo5cnenvnwczqil57tqgm
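Poisson factorisation models word counts as Poisson draws whose rates are products of non-negative document and topic factors. Here is a short NumPy sketch of the generative side and its log-likelihood, with illustrative shapes and Gamma hyperparameters; the adversarial component of BTM is not shown.

```python
# Minimal sketch of Poisson factorisation: counts ~ Poisson(theta @ beta),
# with Gamma-distributed non-negative factors. Shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
D, V, K = 100, 2000, 20                 # documents, vocabulary, latent factors
theta = rng.gamma(0.3, 1.0, (D, K))     # document-factor intensities
beta = rng.gamma(0.3, 1.0, (K, V))      # factor-word intensities
rate = theta @ beta                     # (D, V) Poisson rates
counts = rng.poisson(rate)              # generated word counts

def log_likelihood(x, rate):
    # Poisson log-likelihood up to the constant -log(x!)
    return float(np.sum(x * np.log(rate + 1e-12) - rate))
```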

AC-BLSTM: Asymmetric Convolutional Bidirectional LSTM Networks for Text Classification [article]

Depeng Liang, Yongdong Zhang
2017 arXiv   pre-print
In this work, we propose a novel framework called AC-BLSTM for modeling sentences and documents, which combines the asymmetric convolutional neural network (ACNN) with the Bidirectional Long Short-Term Memory  ...  In order to further improve the performance of AC-BLSTM, we propose a semi-supervised learning framework called G-AC-BLSTM for text classification by combining the generative model with AC-BLSTM.  ...  Deep generative image models using a Laplacian pyramid of adversarial networks.  ... 
arXiv:1611.01884v3 fatcat:mbxbbdcws5drdmcfqk3qe45fyi
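The asymmetric convolution factorizes a k x d filter over the embedding matrix into a 1 x d filter followed by a k x 1 filter, and the resulting feature sequence feeds a BiLSTM. A minimal sketch follows, with assumed sizes and mean-pooling; this is not the paper's exact architecture.

```python
# Minimal sketch of an AC-BLSTM-style model: asymmetric convolutions over
# word embeddings followed by a bidirectional LSTM and a linear classifier.
import torch
import torch.nn as nn

class ACBLSTM(nn.Module):
    def __init__(self, emb_dim=128, channels=64, hidden=100, classes=2):
        super().__init__()
        # asymmetric factorization: (1 x emb_dim) then (3 x 1)
        self.conv1 = nn.Conv2d(1, channels, kernel_size=(1, emb_dim))
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=(3, 1),
                               padding=(1, 0))
        self.bilstm = nn.LSTM(channels, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, emb):                   # emb: (B, T, emb_dim)
        x = emb.unsqueeze(1)                  # (B, 1, T, emb_dim)
        x = torch.relu(self.conv1(x))         # (B, C, T, 1)
        x = torch.relu(self.conv2(x))         # (B, C, T, 1)
        x = x.squeeze(-1).transpose(1, 2)     # (B, T, C)
        out, _ = self.bilstm(x)               # (B, T, 2*hidden)
        return self.fc(out.mean(dim=1))       # pool over time, then classify
```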

Robust Neural Machine Translation with Doubly Adversarial Inputs

Yong Cheng, Lu Jiang, Wolfgang Macherey
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
We propose an approach to improving the robustness of NMT models, which consists of two parts: (1) attack the translation model with adversarial source examples; (2) defend the translation model with adversarial  ...  Neural machine translation (NMT) often suffers from the vulnerability to noisy perturbations in the input.  ...  We treated the single part of parallel corpus as monolingual data to train bidirectional language models without introducing additional data.  ... 
doi:10.18653/v1/p19-1425 dblp:conf/acl/ChengJM19 fatcat:6ap3euczwfd4bn37mjfai5klhm
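A gradient-guided word replacement is one way to realize adversarial source examples: score candidate substitutions by a first-order approximation of their effect on the loss. The sketch below assumes a model that consumes embeddings and replaces a single position per sentence; both are illustrative simplifications of this line of work.

```python
# Minimal sketch of a gradient-guided word-substitution attack for NMT-style
# models: pick the vocabulary word whose embedding direction most increases
# the loss, at the most damaging position.
import torch

def adversarial_source(model, embed, src_ids, loss_fn):
    emb = embed(src_ids)                               # (B, T, D)
    probe = emb.detach().requires_grad_(True)
    g, = torch.autograd.grad(loss_fn(model(probe)), probe)
    # first-order score of every vocabulary word at every position
    scores = torch.einsum('btd,vd->btv', g, embed.weight.detach())
    best = scores.argmax(dim=-1)                       # (B, T) best substitute
    pos = scores.max(dim=-1).values.argmax(dim=-1)     # (B,) worst position
    adv = src_ids.clone()
    rows = torch.arange(adv.size(0))
    adv[rows, pos] = best[rows, pos]                   # single replacement
    return adv
```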

A Neural Lip-Sync Framework for Synthesizing Photorealistic Virtual News Anchors [article]

Ruobing Zheng, Zhou Zhu, Bo Song, Changjiang Ji
2021 arXiv   pre-print
Lack of natural appearance, visual consistency, and processing efficiency are the main problems with existing methods.  ...  Experiments also show the framework has advantages over modern neural-based methods in both visual appearance and efficiency.  ...  We show that the adversarial temporal convolutional networks are an effective solution for modeling the target seq-to-seq mapping.  ... 
arXiv:2002.08700v2 fatcat:cyyrena2ujgmriokwa3wludjqq

Topics to Avoid: Demoting Latent Confounds in Text Classification [article]

Sachin Kumar, Shuly Wintner, Noah A. Smith, Yulia Tsvetkov
2021 arXiv   pre-print
We propose a method that represents the latent topical confounds and a model which "unlearns" confounding features by predicting both the label of the input text and the confound; but we train the two  ...  Despite impressive performance on many text classification tasks, deep neural networks tend to learn frequent superficial patterns that are specific to the training data and do not always generalize well  ...  During the i-th "topic training" phase, we train a new adversary adv_i (with parameters θ_{a_i}) instead of re-training only one adversary over and over again.  ... 
arXiv:1909.00453v2 fatcat:sevzn5z4lfhq3es5bhrenk527q
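Concretely, the encoder and label classifier are updated to predict the label while making the confound unpredictable, and each "topic training" phase starts a fresh adversary adv_i, as the snippet describes. A minimal sketch with assumed feature and class sizes:

```python
# Minimal sketch of adversarially demoting a latent confound: the adversary
# learns to predict the confound from the representation; the main model is
# penalized when the confound is predictable.
import torch
import torch.nn as nn
import torch.nn.functional as F

H, LABELS, CONFOUNDS = 256, 2, 50
encoder = nn.Sequential(nn.Linear(300, H), nn.ReLU())  # 300-d features assumed
clf = nn.Linear(H, LABELS)

def topic_training_phase(batches, alpha=1.0):
    adv = nn.Linear(H, CONFOUNDS)   # fresh adversary adv_i for this phase
    opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
    opt_main = torch.optim.Adam(list(encoder.parameters()) +
                                list(clf.parameters()), lr=1e-3)
    for x, y, confound in batches:
        h = encoder(x)
        # adversary step: learn to recover the confound from h
        adv_loss = F.cross_entropy(adv(h.detach()), confound)
        opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()
        # main step: predict the label while hurting the adversary
        main_loss = (F.cross_entropy(clf(h), y)
                     - alpha * F.cross_entropy(adv(h), confound))
        opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```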

Cross-modal Adversarial Reprogramming [article]

Paarth Neekhara, Shehzeen Hussain, Jinglong Du, Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley
2021 arXiv   pre-print
With the abundance of large-scale deep learning models, it has become possible to repurpose pre-trained networks for new tasks.  ...  We analyze the feasibility of adversarially repurposing image classification neural networks for Natural Language Processing (NLP) and other sequence classification tasks.  ...  Model hyper-parameter details of benchmark classifiers: for training the benchmark neural text classifiers, we use Bi-LSTM and 1D-CNN models with a softmax classification head.  ... 
arXiv:2102.07325v3 fatcat:bddexr7hjnefhbsdapkq3r7ome
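The core trick is to render token embeddings as an image-shaped "program" fed to a frozen image classifier and to map its source classes onto task labels. In the sketch below, the per-token patch layout, the tanh squashing, and the fixed many-to-one label map are all illustrative assumptions.

```python
# Minimal sketch of cross-modal adversarial reprogramming: tokens become
# image tiles, a frozen image classifier scores the composed image, and
# source-class logits are aggregated into task-class logits.
import torch
import torch.nn as nn

class Reprogrammer(nn.Module):
    def __init__(self, vocab=10000, patch=16, grid=14,
                 task_classes=2, source_classes=1000):
        super().__init__()
        self.patch, self.grid, self.task_classes = patch, grid, task_classes
        # each token id is embedded as a flattened 3 x patch x patch tile
        self.tok2patch = nn.Embedding(vocab, 3 * patch * patch)
        # fixed many-to-one map from source classes to task labels (assumed)
        self.register_buffer('label_map',
                             torch.arange(source_classes) % task_classes)

    def forward(self, frozen_model, token_ids):   # token_ids: (B, grid*grid)
        b, p, g = token_ids.size(0), self.patch, self.grid
        tiles = self.tok2patch(token_ids).view(b, g, g, 3, p, p)
        img = tiles.permute(0, 3, 1, 4, 2, 5).reshape(b, 3, g * p, g * p)
        logits = frozen_model(torch.tanh(img))    # frozen image classifier
        task_logits = img.new_zeros(b, self.task_classes)
        task_logits.index_add_(1, self.label_map, logits)
        return task_logits
```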

Topics to Avoid: Demoting Latent Confounds in Text Classification

Sachin Kumar, Shuly Wintner, Noah A. Smith, Yulia Tsvetkov
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
We propose a method that represents the latent topical confounds and a model which "unlearns" confounding features by predicting both the label of the input text and the confound; but we train the two  ...  Despite impressive performance on many text classification tasks, deep neural networks tend to learn frequent superficial patterns that are specific to the training data and do not always generalize well  ...  We trained a standard (non-adversarial) classifier with a bidirectional LSTM encoder followed by two feedforward layers with a tanh activation function and a softmax in the final layer (full experimental  ... 
doi:10.18653/v1/d19-1425 dblp:conf/emnlp/KumarWST19 fatcat:vntaddoipfcdtieugamjzgkviq
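The baseline described in the last snippet is concrete enough to sketch: a BiLSTM encoder, two feedforward layers with tanh, and a final softmax. Hidden sizes below are illustrative assumptions.

```python
# Minimal sketch of the non-adversarial baseline classifier described above.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, emb_dim=300, hidden=128, ff=64, classes=2):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.ff = nn.Sequential(nn.Linear(2 * hidden, ff), nn.Tanh(),
                                nn.Linear(ff, classes))

    def forward(self, emb):                  # emb: (B, T, emb_dim)
        _, (h, _) = self.encoder(emb)        # h: (2, B, hidden)
        pooled = torch.cat([h[0], h[1]], dim=-1)
        # softmax in the final layer, per the description; for training with
        # nn.CrossEntropyLoss one would return the pre-softmax logits instead
        return torch.softmax(self.ff(pooled), dim=-1)
```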

Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training [article]

Yao Wan, Wenqiang Yan, Jianwei Gao, Zhou Zhao, Jian Wu, Philip S. Yu
2018 arXiv   pre-print
Moreover, we apply adversarial training to train our proposed model. We evaluate our model on two public datasets, i.e., the Switchboard dialogue act corpus and the MapTask corpus.  ...  Extensive experiments show that our proposed model is not only robust but also achieves better performance when compared with some state-of-the-art baselines.  ...  Moreover, we train our model via adversarial training, which is the process of training a model to correctly classify both unmodified examples and adversarial examples.  ... 
arXiv:1811.05021v1 fatcat:czvoihwvsnccxossnctqvom6wa

Posterior-GAN: Towards Informative and Coherent Response Generation with Posterior Generative Adversarial Network

Shaoxiong Feng, Hongshen Chen, Kan Li, Dawei Yin
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)  
Moreover, the general dull-response problem is exacerbated when the model is confronted with meaningless response training instances.  ...  Neural conversational models learn to generate responses by taking into account the dialog history.  ...  Compared to general query-response tuples, the triples help the model use bidirectional information to learn response generation during training. • We propose a novel encoder-decoder based generative adversarial  ... 
doi:10.1609/aaai.v34i05.6273 fatcat:tx3mj26xy5dijizdb6oddc47da