Improve Diverse Text Generation by Self Labeling Conditional Variational Auto Encoder [article]

Yuchi Zhang, Yongliang Wang, Liping Zhang, Zhiqiang Zhang, Kun Gai
2019 arXiv   pre-print
Diversity plays a vital role in many text generation applications. In recent years, Conditional Variational Auto-Encoders (CVAE) have shown promising performance on this task.  ...  To accelerate research on diverse text generation, we also propose a large native one-to-many dataset.  ...  It leads the encoder to reach an equilibrium at which the decoder can take full advantage of the latent variable. Experiments show that SLCVAE largely improves generation diversity.  ... 
arXiv:1903.10842v1 fatcat:yxonidcevffzlbk5vv34qvo5t4
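
To make the mechanism shared by this hit and several later CVAE-based hits concrete, here is a minimal sketch of a conditional VAE objective in PyTorch. All layer sizes and names are illustrative assumptions, not the SLCVAE architecture; the point is the encoder q(z|x,c), the decoder p(x|z,c), and the ELBO whose KL term is what the "equilibrium" in the snippet is about.

```python
# Minimal conditional VAE sketch (hypothetical dimensions, not SLCVAE itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalVAE(nn.Module):
    def __init__(self, x_dim=256, cond_dim=16, z_dim=32, hidden=128):
        super().__init__()
        # Encoder q(z | x, c): maps input + condition to Gaussian parameters.
        self.enc = nn.Sequential(nn.Linear(x_dim + cond_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        # Decoder p(x | z, c): reconstructs the input from latent + condition.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim))

    def forward(self, x, c):
        h = self.enc(torch.cat([x, c], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_hat = self.dec(torch.cat([z, c], dim=-1))
        return x_hat, mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Closed-form KL( q(z|x,c) || N(0, I) ). When this term is driven to zero
    # the decoder ignores z ("posterior collapse") and diversity drops.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Typical use: `x_hat, mu, logvar = model(x, c)` then `loss = elbo_loss(x, x_hat, mu, logvar)`; posterior collapse of the KL term appears to be the failure mode the snippet's "equilibrium" addresses.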

Complementary Auxiliary Classifiers for Label-Conditional Text Generation

Yuan Li, Chunyuan Li, Yizhe Zhang, Xiujun Li, Guoqing Zheng, Lawrence Carin, Jianfeng Gao
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)  
CARA shows consistent improvement over previous methods on the task of label-conditional text generation, and achieves state-of-the-art results on the task of attribute transfer.  ...  Learning to generate text with a given label is a challenging task because natural language sentences are highly variable and ambiguous.  ...  Auto-encoder for Conditional Text Generation: the generator G is typically learned by maximizing the marginal log-likelihood $\log p_G(x) = \log \int p_G(x \mid z, y)\, p(z)\, p(y)\, dz\, dy$, where $p(y)$ is the label distribution  ... 
doi:10.1609/aaai.v34i05.6346 fatcat:ak2sf35nsrfhfjfpd332xlggja
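
For context, the marginal log-likelihood quoted in the snippet is intractable to optimize directly, so models in this family are typically trained on a variational lower bound. A standard (generic, not CARA-specific) form, with an approximate posterior $q(z, y \mid x)$, is:

```latex
\log p_G(x)
  = \log \int p_G(x \mid z, y)\, p(z)\, p(y)\, dz\, dy
  \;\ge\; \mathbb{E}_{q(z, y \mid x)}\big[\log p_G(x \mid z, y)\big]
        - \mathrm{KL}\big(q(z, y \mid x)\,\Vert\, p(z)\, p(y)\big),
```

where the inequality follows from Jensen's inequality.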

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation [article]

Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari
2022 arXiv   pre-print
In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for conditional natural language generation with none or a handful of task-specific labeled examples.  ...  In order to improve compositional generalization, our model performs disentangled representation learning by introducing a prior for the latent content space and another prior for the latent label space  ...  condition could help the encoder generalize well on the novel labels.  ... 
arXiv:2202.13363v2 fatcat:e6tjmp4hcjbpxdl2qrsytvthjq
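
A hedged sketch of the disentanglement-prior idea described above, in the same PyTorch style: the latent code is split into a content part and a label part, each regularized toward its own prior (both taken as standard Gaussians here purely for illustration; VAE-DPRIOR's actual priors and losses differ).

```python
import torch

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

def disentangled_kl(mu_c, logvar_c, mu_l, logvar_l):
    # Separate KL penalties for the content latent and the label latent
    # push the encoder to route content and label information into
    # different parts of the code, which is what lets the shared content
    # space be reused for novel labels.
    return gaussian_kl(mu_c, logvar_c) + gaussian_kl(mu_l, logvar_l)
```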

Exploring Diverse Expressions for Paraphrase Generation

Lihua Qian, Lin Qiu, Weinan Zhang, Xin Jiang, Yong Yu
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
Our experiments on two real-world datasets demonstrate that our model not only gains a significant increase in diversity but also improves generation quality over several state-of-the-art baselines.  ...  Little prior work has addressed diverse paraphrase generation.  ...  Acknowledgments: The work is sponsored by the Huawei Innovation Research Program. The corresponding authors Weinan Zhang and Yong Yu are also supported by NSFC (61702327, 61772333, 61632017).  ... 
doi:10.18653/v1/d19-1313 dblp:conf/emnlp/QianQZJY19 fatcat:2brf7np36bhenjdeio3ggbnzwu

CG-BERT: Conditional Text Generation with BERT for Generalized Few-shot Intent Detection [article]

Congying Xia, Chenwei Zhang, Hoang Nguyen, Jiawei Zhang, Philip Yu
2020 arXiv   pre-print
By modeling the utterance distribution with variational inference, CG-BERT can generate diverse utterances for novel intents even when only a few utterances are available.  ...  CG-BERT effectively leverages a large pre-trained language model to generate text conditioned on the intent label.  ...  The Conditional VAE (CVAE) (Kingma et al., 2014) was proposed to improve over seq2seq models by generating more diverse and relevant text.  ... 
arXiv:2004.01881v1 fatcat:iofdy3sagbdsdmsh7rkzm32yhe

Variational Conditional GAN for Fine-grained Controllable Image Generation [article]

Mingqi Hu, Deyu Zhou, Yulan He
2019 arXiv   pre-print
In this paper, we propose a novel variational generator framework for conditional GANs to capture semantic details, improving generation quality and diversity.  ...  However, the hidden condition information is not fully exploited, especially when the input is a class label.  ...  Mansimov et al. (2016) built the AlignDRAW model based on a recurrent variational auto-encoder to learn the alignment between text embeddings and the canvas being generated.  ... 
arXiv:1909.09979v1 fatcat:4ztxszrfrvgmdesjgasf325jte

Semi-Supervision in ASR: Sequential MixMatch and Factorized TTS-Based Augmentation

Zhehuai Chen, Andrew Rosenberg, Yu Zhang, Heiga Zen, Mohammadreza Ghodsi, Yinghui Huang, Jesse Emond, Gary Wang, Bhuvana Ramabhadran, Pedro J. Moreno
2021 Conference of the International Speech Communication Association  
Semi- and self-supervised training techniques have the potential to improve the performance of speech recognition systems without additional transcribed speech data.  ...  The two approaches leverage vast amounts of available unspoken text and untranscribed audio. First, we present factorized multilingual speech synthesis to improve data augmentation on unspoken text.  ...  To model prosody and increase its variability during inference, we further augment the model with a variational auto-encoder (VAE) as in [30] and modify its global VAE into a hierarchical version  ... 
doi:10.21437/interspeech.2021-677 dblp:conf/interspeech/ChenRZZGHEWRM21 fatcat:zsycvz73lvbffkha3psy3mu75q

Multi-view Deep Subspace Clustering Networks [article]

Pengfei Zhu, Binyuan Hui, Changqing Zhang, Dawei Du, Longyin Wen, Qinghua Hu
2019 arXiv   pre-print
A latent space is built upon deep convolutional auto-encoders, and a self-representation matrix is learned in the latent space using a fully connected layer.  ...  As different views share the same label space, the self-representation matrices of each view are aligned to a common one by a universality regularization.  ...  Auto-Encoders: auto-encoders (AE) extract features from data by mapping it to a low-dimensional space.  ... 
arXiv:1908.01978v1 fatcat:4uf2efjh5va5lmx4ywhwng7pni
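
The self-representation step described in this hit can be written down in a few lines. Below is a single-view sketch with made-up shapes; the paper additionally aligns each view's C to a common matrix via the universality regularization and constrains diag(C) = 0, both omitted here.

```python
import torch

n, d = 64, 32                          # samples, latent dimension (illustrative)
Z = torch.randn(n, d)                  # latent codes from the conv auto-encoder
C = torch.nn.Parameter(0.01 * torch.randn(n, n))  # self-representation matrix
opt = torch.optim.Adam([C], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    # ||Z - C Z||^2 asks each latent code to be a linear combination of the
    # others; the Frobenius penalty keeps C from degenerating.
    loss = ((Z - C @ Z) ** 2).sum() + 0.1 * (C ** 2).sum()
    loss.backward()
    opt.step()

# Spectral clustering is then run on the affinity |C| + |C|^T (not shown).
```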

Conditioned Query Generation for Task-Oriented Dialogue Systems [article]

Stéphane d'Ascoli, Alice Coucke, Francesco Caltagirone, Alexandre Caulier, Marc Lelarge
2019 arXiv   pre-print
First we show how to optimally train and control the generation of intent-specific sentences using a conditional variational autoencoder.  ...  Comparison with two different baselines shows that our method, in the appropriate regime, consistently improves the diversity of the generated queries without compromising their quality.  ...  Figs. 3 & 4 show results obtained by training a very simple two-layer fully-connected conditional variational auto-encoder for 200 epochs, for various values of the transfer parameter α.  ... 
arXiv:1911.03698v1 fatcat:r2nvvem3hfb3jowgmnjkfldnwu

A Primer on Contrastive Pretraining in Language Processing: Methods, Lessons Learned and Perspectives [article]

Nils Rethmeier, Isabelle Augenstein
2021 arXiv   pre-print
In this survey, we summarize recent self-supervised and supervised contrastive NLP pretraining methods and describe where they are used to improve language modeling, few- or zero-shot learning, pretraining  ...  Contrastive self-supervised training objectives enabled recent successes in image representation pretraining by learning to contrast input-input pairs of augmented images as either similar or dissimilar  ...  Text generation as a discriminative EBM: [Deng et al., 2020] combine an auto-regressive language model with a contrastive text-continuation energy-based model (EBM) for improved text generation.  ... 
arXiv:2102.12982v1 fatcat:ivzglgl3zvczddywwwjdqewkmi
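
As background for the snippet's "contrast input-input pairs" phrasing, here is a compact sketch of the generic InfoNCE-style objective most surveyed methods build on (a generic form, not a specific method from the survey):

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """anchors[i] and positives[i] embed two augmented views of sample i."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(a.size(0))    # the true pair sits on the diagonal
    # Cross-entropy pulls matched views together and pushes the rest apart.
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```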

Multimodal Image Synthesis and Editing: A Survey [article]

Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu, Lingjie Liu, Adam Kortylewski, Christian Theobalt, Eric Xing
2022 arXiv   pre-print
We then describe multimodal image synthesis and editing approaches extensively with detailed frameworks, including Generative Adversarial Networks (GANs), Auto-regressive models, Diffusion models, Neural  ...  This is followed by a comprehensive description of benchmark datasets and corresponding evaluation metrics as widely adopted in multimodal image synthesis and editing, as well as detailed comparisons of  ...  As a pioneering effort in multimodal image synthesis, [5] shows that a recurrent variational auto-encoder can generate novel visual scenes conditioned on image captions.  ... 
arXiv:2112.13592v3 fatcat:46twjhz3hbe6rpm33k6ilnisga

FA-GAN: Feature-Aware GAN for Text to Image Synthesis [article]

Eunyeong Jeon, Kunhee Kim, Daijin Kim
2021 arXiv   pre-print
Secondly, we introduce a feature-aware loss to provide the generator with more direct supervision by employing the feature representation from the self-supervised discriminator.  ...  To address this issue, we propose the Feature-Aware Generative Adversarial Network (FA-GAN) to synthesize high-quality images by integrating two techniques: a self-supervised discriminator and a feature-aware  ...  The self-supervised discriminator with an extra decoder extracts better feature representations via auto-encoding training.  ... 
arXiv:2109.00907v1 fatcat:yzq3s4tbprg7xmslvhdpxtokni
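
A hedged sketch of what a feature-matching style loss like the "feature-aware loss" above can look like (illustrative only; FA-GAN's exact formulation may differ, and `disc_features` is a hypothetical hook, not an API from the paper): the generator is trained to match the self-supervised discriminator's features of real images.

```python
import torch
import torch.nn.functional as F

def feature_aware_loss(disc_features, real_imgs, fake_imgs):
    """disc_features: any callable returning the discriminator's feature map
    (name and signature are assumptions for this sketch)."""
    with torch.no_grad():                 # real-image features act as targets
        target = disc_features(real_imgs)
    # L1 distance in feature space gives the generator denser supervision
    # than the discriminator's scalar real/fake score alone.
    return F.l1_loss(disc_features(fake_imgs), target)
```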

Self-supervised Learning: Generative or Contrastive [article]

Xiao Liu, Fanjin Zhang, Zhenyu Hou, Zhaoyu Wang, Li Mian, Jing Zhang, Jie Tang
2021 arXiv   pre-print
However, its heavy dependence on manual labels and vulnerability to attacks have driven researchers to explore better solutions.  ...  We comprehensively review the existing empirical methods and summarize them into three main categories according to their objectives: generative, contrastive, and generative-contrastive (adversarial).  ...  Acknowledgments: The work is supported by the National Key R&D Program of China (2018YFB1402600), NSFC for Distinguished Young Scholar (61825602), and NSFC (61836013).  ... 
arXiv:2006.08218v5 fatcat:t324amt3lzaehfa262xbn5hkqe

Conditioned Text Generation with Transfer for Closed-Domain Dialogue Systems [article]

Stéphane d'Ascoli, Alice Coucke, Francesco Caltagirone, Alexandre Caulier, Marc Lelarge
2020 arXiv   pre-print
First we show how to optimally train and control the generation of intent-specific sentences using a conditional variational autoencoder.  ...  Comparison with two different baselines shows that this method, in the appropriate regime, consistently improves the diversity of the generated queries without compromising their quality.  ...  Using a Conditional Variational Auto-Encoder (CVAE) [18], we show how it is possible to selectively extract the valuable information from the reservoir dataset.  ... 
arXiv:2011.02143v1 fatcat:xdidba6d35hnbbc7mhwls5oykq

MeronymNet: A Hierarchical Approach for Unified and Controllable Multi-Category Object Generation [article]

Rishabh Baghel, Abhishek Trivedi, Tejas Ravichandran, Ravi Kiran Sarvadevabhatla
2021 arXiv   pre-print
We use Graph Convolutional Networks and Deep Recurrent Networks, along with custom-designed Conditional Variational Autoencoders, to enable flexible, diverse and category-aware generation of 2-D objects in  ...  We adopt a guided coarse-to-fine strategy involving semantically conditioned generation of bounding box layouts, pixel-level part layouts and, ultimately, the object depictions themselves.  ...  MERONYMNET: We begin with a brief overview of the Variational Auto-Encoder (VAE) [19] and its extension, the Conditional Variational Auto-Encoder (CVAE) [32].  ... 
arXiv:2110.08818v1 fatcat:gzbovnctkfckhecop45wcbzwry
Showing results 1 — 15 of 6,593