15,576 Hits in 7.4 sec

Style Example-Guided Text Generation using Generative Adversarial Transformers [article]

Kuo-Hao Zeng and Mohammad Shoeybi and Ming-Yu Liu
2020 arXiv   pre-print
The style encoder extracts a style code from the reference example, and the text decoder generates text based on the style code and the context.  ...  We introduce a language generative model framework for generating a styled paragraph based on a context sentence and a style reference example.  ...  The proposed style example-guided text generation framework is based on generative adversarial networks (GANs), and we utilize the transformer in both the generator and discriminator design.  ... 
arXiv:2003.00674v1 fatcat:2452647j5zhe7ouflakpolz5oe
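A minimal sketch of the encode-then-generate interface this abstract describes: a "style code" is extracted from a reference example, and the decoder conditions its output on that code. The real model uses transformer encoders and decoders; both functions below are hypothetical toy stand-ins for illustration only.

```python
# Toy sketch (not the paper's model): extract a style code from a reference
# example, then generate a continuation of the context in that style.

def extract_style_code(reference):
    """Toy style encoder: classify the reference as shouty (all caps) or not."""
    return "upper" if reference.isupper() else "lower"

def generate(context, style_code):
    """Toy decoder: continue the context, rendered in the extracted style."""
    continuation = context + " and so it goes"
    return continuation.upper() if style_code == "upper" else continuation.lower()

code = extract_style_code("THIS IS THE REFERENCE")
print(generate("The story began", code))  # "THE STORY BEGAN AND SO IT GOES"
```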

Cycle-Consistent Adversarial Autoencoders for Unsupervised Text Style Transfer [article]

Yufang Huang, Wentao Zhu, Deyi Xiong, Yiye Zhang, Changjian Hu, Feiyu Xu
2020 arXiv   pre-print
representation into a style-transferred text, (2) adversarial style transfer networks that use an adversarially trained generator to transform a latent representation in one style into a representation  ...  In this paper, we propose a novel neural approach to unsupervised text style transfer, which we refer to as Cycle-consistent Adversarial autoEncoders (CAE) trained from non-parallel data.  ...  We use generative adversarial networks (Goodfellow et al., 2014) to learn the two transformation functions. Let's consider the learning of the transformation T_{1→2}.  ... 
arXiv:2010.00735v1 fatcat:wdvglp64mng5jcoozng7pn5koq
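The cycle-consistency idea behind CAE is that transforming a representation from style 1 to style 2 and back should recover the original. A minimal numeric sketch, with the two learned transformations replaced by hypothetical affine maps over toy latent vectors (names and shift value are illustrative assumptions, not the paper's code):

```python
# Illustrative cycle-consistency loss between two hypothetical style-transfer
# functions T_1to2 and T_2to1, modeled here as simple shift maps.

def t_1to2(x, shift=0.5):
    """Hypothetical transformation from style 1 to style 2."""
    return [v + shift for v in x]

def t_2to1(y, shift=0.5):
    """Hypothetical inverse transformation from style 2 back to style 1."""
    return [v - shift for v in y]

def cycle_loss(x):
    """L1 cycle-consistency: mapping to style 2 and back should recover x."""
    x_rec = t_2to1(t_1to2(x))
    return sum(abs(a - b) for a, b in zip(x, x_rec)) / len(x)

latent = [0.25, 0.5, 1.0]
print(cycle_loss(latent))  # 0.0, since the two maps are exact inverses
```

In the actual model this loss is combined with adversarial losses so that the transferred representation also matches the target style distribution.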

Separating Content from Style Using Adversarial Learning for Recognizing Text in the Wild [article]

Canjie Luo, Qingxiang Lin, Yuliang Liu, Lianwen Jin, Chunhua Shen
2020 arXiv   pre-print
Therefore, the discriminator can guide the generator according to the confusion of the recognizer, so that the generated patterns are clearer for recognition.  ...  Benefiting from the character-level adversarial training, our framework requires only unpaired simple data for style supervision.  ...  If a "G" is transformed to look more like a "C" and the recognizer predicts it to be a "C", the discriminator will learn that the pattern is a "C" and guide the generator to generate a clearer "G".  ... 
arXiv:2001.04189v3 fatcat:wpqllhdse5hitaauehg3ehriia

Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives [Review Article]

Jing Han, Zixing Zhang, Björn Schuller
2019 IEEE Computational Intelligence Magazine  
As a potentially crucial technique for the development of the next generation of emotional AI systems, we herein provide a comprehensive overview of the application of adversarial training to affective  ...  Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains.  ...  The VoiceGAN framework consists of two generators/transformers (G_AB and G_BA) and three discriminators (D_A, D_B, and D_style). G_AB attempts to transform instances from style A to style B, while G_BA  ... 
doi:10.1109/mci.2019.2901088 fatcat:edkvfgy3ofgufcytngf5mktpae

DANCin SEQ2SEQ: Fooling Text Classifiers with Adversarial Text Example Generation [article]

Catherine Wong
2017 arXiv   pre-print
Despite significant recent work on adversarial example generation targeting image classifiers, relatively little work exists exploring adversarial example generation for text classifiers; additionally,  ...  In this work, we introduce DANCin SEQ2SEQ, a GAN-inspired algorithm for adversarial text example generation targeting largely black-box text classifiers.  ...  Acknowledgments Many thanks to Will Monroe for his crackerjack adversarial text generation advice and expertise, and for sharing an alarming series of articles about 3D-printed turtles misclassified as  ... 
arXiv:1712.05419v1 fatcat:ccrkfg4nargw3hctkdi6iispym
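The black-box setting this abstract describes can be sketched with a toy attack: query the classifier, rewrite the input word by word, and stop when the label flips. Everything below (classifier, synonym table, word lists) is a hypothetical illustration, not the DANCin SEQ2SEQ algorithm itself, which uses a trained sequence-to-sequence generator:

```python
# Toy black-box adversarial text generation via greedy synonym substitution
# against a simple keyword-based sentiment classifier.

def classify(text):
    """Toy black-box classifier: positive iff it sees a known positive word."""
    positive_words = {"great", "good", "excellent"}
    return "positive" if any(w in positive_words for w in text.split()) else "negative"

SYNONYMS = {"great": "stellar", "good": "decent"}  # hypothetical synonym table

def adversarial_rewrite(text):
    """Greedily substitute synonyms until the classifier's label flips."""
    words = text.split()
    original = classify(text)
    for i, w in enumerate(words):
        if w in SYNONYMS:
            words[i] = SYNONYMS[w]
            if classify(" ".join(words)) != original:
                return " ".join(words)
    return " ".join(words)

print(adversarial_rewrite("a great movie"))  # "a stellar movie" flips the toy label
```

The toy attack only needs label queries, mirroring the largely black-box threat model: no gradients or internals of the classifier are used.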

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators [article]

Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or
2021 arXiv   pre-print
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image? In other words: can an image generator be trained "blindly"?  ...  We show that through natural language prompts and a few minutes of training, our method can adapt a generator across a multitude of domains characterized by diverse styles and shapes.  ...  Attempting to guide a face-generator conversion with the text 'doctor', for example, causes the generator to produce mostly males, while using the text 'nurse' has the opposite effect.  ... 
arXiv:2108.00946v2 fatcat:lnn4ydsoenauxbpu6ijpm3ccn4
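A common way to realize this kind of CLIP-guided adaptation is a directional loss: the edit direction between generated and source images in embedding space should align with the direction between target and source text prompts. The sketch below uses toy 2-D vectors in place of real CLIP encoders (which it assumes away); the function names are illustrative:

```python
# Illustrative CLIP-style directional guidance loss on toy embedding vectors.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def directional_loss(e_src_img, e_gen_img, e_src_txt, e_tgt_txt):
    """1 - cos(delta_image, delta_text): zero when the edits point the same way."""
    d_img = [g - s for g, s in zip(e_gen_img, e_src_img)]
    d_txt = [t - s for t, s in zip(e_tgt_txt, e_src_txt)]
    return 1.0 - cosine(d_img, d_txt)

# Toy vectors: the generated image moved exactly along the text direction.
loss = directional_loss([0.0, 0.0], [1.0, 1.0], [0.2, 0.2], [1.2, 1.2])
print(loss)  # 0.0
```

Minimizing this loss pushes the generator's output to change in the direction the prompt pair describes, rather than merely matching the target text embedding.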

Exploring Controllable Text Generation Techniques [article]

Shrimai Prabhumoye, Alan W Black, Ruslan Salakhutdinov
2020 arXiv   pre-print
Neural controllable text generation is an important area gaining attention due to its plethora of applications.  ...  Although there is a large body of prior work in controllable text generation, there is no unifying theme.  ...  In the case of the style transfer task, this loss is used to guide the generation process to output the target style tokens.  ... 
arXiv:2005.01822v2 fatcat:73tfkjvy7jcjdftlj4aurqswbu

Text Style Transfer: A Review and Experimental Evaluation [article]

Zhiqiang Hu, Roy Ka-Wei Lee, Charu C. Aggarwal, Aston Zhang
2021 arXiv   pre-print
Specifically, researchers have investigated the Text Style Transfer (TST) task, which aims to change the stylistic properties of the text while retaining its style-independent content.  ...  This article aims to provide a comprehensive review of recent research efforts on text style transfer.  ...  There are two variants of the GST model: the Blind Generative Style Transformer (B-GST) and the Guided Generative Style Transformer (G-GST).  ... 
arXiv:2010.12742v2 fatcat:gmkjxf7f7jhivbo6mayaxjsk7q

A Review of Text Style Transfer using Deep Learning

Martina Toshevska, Sonja Gievska
2021 IEEE Transactions on Artificial Intelligence  
A systematic review of text style transfer methodologies using deep learning is presented in this paper.  ...  The review is structured around two key stages in the text style transfer process, namely, representation learning and sentence generation in a new style.  ...  [15] proposed two models, Blind Generative Style Transformer (B-GST) and Guided Generative Style Transformer (G-GST), that follow the same modeling approach as DeleteOnly and DeleteAndRetrieve [14]  ... 
doi:10.1109/tai.2021.3115992 fatcat:jn6trym6azdj7iomd2goyucbfi

Spatial Fusion GAN for Image Synthesis [article]

Fangneng Zhan, Hongyuan Zhu, Shijian Lu
2019 arXiv   pre-print
Recent advances in generative adversarial networks (GANs) have shown great potential in realistic image synthesis, whereas most existing works address synthesis realism in either appearance space or geometry  ...  The appearance synthesizer adjusts the color, brightness and styles of the foreground objects and embeds them into background images harmoniously, where a guided filter is introduced for detail preserving  ...  Take scene text image synthesis as an example.  ... 
arXiv:1812.05840v3 fatcat:uzent4sy35cpjkslziikyppuni

AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples

Dongyeop Kang, Tushar Khot, Ashish Sabharwal, Eduard Hovy
2018 Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)  
First, we propose knowledge-guided adversarial example generators for incorporating large lexical resources in entailment models via only a handful of rule templates.  ...  Second, to make the entailment model-a discriminator-more robust, we propose the first GAN-style approach for training it using a natural language example generator that iteratively adjusts based on the  ...  For each mini-batch, we generate new entailment examples, Z_G, using our adversarial examples generator.  ... 
doi:10.18653/v1/p18-1225 dblp:conf/acl/HovyKSK18 fatcat:533izj4jenckphvdhmltq3i75u
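The rule-template idea can be sketched concretely: a template such as "x is-a y ⇒ replacing x with y preserves entailment" turns a lexical resource into new labeled pairs. The mini-lexicon below is a hypothetical stand-in for a resource like WordNet, and the function name is illustrative, not from the paper:

```python
# Toy knowledge-guided entailment example generation from a hypernym template.

HYPERNYMS = {"dog": "animal", "car": "vehicle"}  # toy stand-in for a lexical resource

def generate_entailment_examples(premise):
    """For each known hypernym rule, rewrite the premise into an entailed hypothesis."""
    examples = []
    words = premise.split()
    for i, w in enumerate(words):
        if w in HYPERNYMS:
            hypothesis = " ".join(words[:i] + [HYPERNYMS[w]] + words[i + 1:])
            examples.append((premise, hypothesis, "entailment"))
    return examples

print(generate_entailment_examples("a dog runs"))
# [('a dog runs', 'a animal runs', 'entailment')]
```

This naive sketch does not adjust determiners ("a animal"); the point is only that a handful of templates applied over a large lexicon yields many training examples for the discriminator.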

Spatial Fusion GAN for Image Synthesis

Fangneng Zhan, Hongyuan Zhu, Shijian Lu
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Recent advances in generative adversarial networks (GANs) have shown great potential in realistic image synthesis, whereas most existing works address synthesis realism in either appearance space or geometry  ...  The appearance synthesizer adjusts the color, brightness and styles of the foreground objects and embeds them into background images harmoniously, where a guided filter is introduced for detail preserving  ...  Take scene text image synthesis as an example.  ... 
doi:10.1109/cvpr.2019.00377 dblp:conf/cvpr/ZhanZL19 fatcat:mxijttgvlvfbfmuztb4nva2fpu

Adversarial Training in Affective Computing and Sentiment Analysis: Recent Advances and Perspectives [article]

Jing Han, Zixing Zhang, Nicholas Cummins, Björn Schuller
2018 arXiv   pre-print
As a potentially crucial technique for the development of the next generation of emotional AI systems, we herein provide a comprehensive overview of the application of adversarial training to affective  ...  Over the past few years, adversarial training has become an extremely active research topic and has been successfully applied to various Artificial Intelligence (AI) domains.  ...  EMOTION CONVERSION Emotion conversion is a specific style transformation task.  ... 
arXiv:1809.08927v1 fatcat:m5mencegljgsphub3p62ltrhby

Emotional Text Generation Based on Cross-Domain Sentiment Transfer

Rui Zhang, Zhenyu Wang, Kai Yin, Zhenhua Huang
2019 IEEE Access  
By combining adversarial reinforcement learning with supervised learning, our model is able to extract patterns of sentiment transformation and apply them in emotional text generation.  ...  Generative adversarial networks (GANs) have shown promising results in natural language generation and data enhancement.  ...  In addition, we will also apply this model to the data-to-text generation task. Figure 1: An example of text style transformation. Figure 2: Overview of our approach.  ... 
doi:10.1109/access.2019.2931036 fatcat:mdsmfs37krc4nb2gcu6htjnmhy

AdvEntuRe: Adversarial Training for Textual Entailment with Knowledge-Guided Examples [article]

Dongyeop Kang and Tushar Khot and Ashish Sabharwal and Eduard Hovy
2018 arXiv   pre-print
First, we propose knowledge-guided adversarial example generators for incorporating large lexical resources in entailment models via only a handful of rule templates.  ...  Second, to make the entailment model - a discriminator - more robust, we propose the first GAN-style approach for training it using a natural language example generator that iteratively adjusts based on  ...  For each mini-batch, we generate new entailment examples, Z_G, using our adversarial examples generator.  ... 
arXiv:1805.04680v1 fatcat:5bmbx4gbdrfjniicylvgr3bscq