19,707 Hits in 5.5 sec

Cycles in adversarial regularized learning [article]

Panayotis Mertikopoulos, Christos Papadimitriou, Georgios Piliouras
2017 arXiv   pre-print
Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science.  ...  We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games.  ...  Recurrence in adversarial regularized learning: In this section, our aim is to take a closer look at the ramifications of fast regret minimization under (FoReL) beyond convergence to the set of coarse correlated  ...
arXiv:1709.02738v1 fatcat:dimq2dcy25dp7boai34w7wevly
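
Aside: "(FoReL)" in the snippet above is the follow-the-regularized-leader scheme. As a generic textbook reminder (not necessarily the exact formulation used in these papers), each player accumulates its observed payoff vectors v_s and plays

    X_{t+1} = \arg\max_{x \in \mathcal{X}} \Big\{ \big\langle x, \textstyle\sum_{s=1}^{t} v_s \big\rangle - \tfrac{1}{\eta}\, h(x) \Big\},

where h is a strongly convex regularizer and \eta > 0 is the learning rate; the papers couple such dynamics across the players of a zero-sum game.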

Cycles in Adversarial Regularized Learning [chapter]

Panayotis Mertikopoulos, Christos Papadimitriou, Georgios Piliouras
2018 Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms  
Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science.  ...  We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games.  ...  Recurrence in adversarial regularized learning: In this section, our aim is to take a closer look at the ramifications of fast regret minimization under (FoReL) beyond convergence to the set of coarse correlated  ...
doi:10.1137/1.9781611975031.172 dblp:conf/soda/MertikopoulosPP18 fatcat:g7ikigmmqjgilipy3juo7uwtvq

Multi-Task Multi-Network Joint-Learning of Deep Residual Networks and Cycle-Consistency Generative Adversarial Networks for Robust Speech Recognition

Shengkui Zhao, Chongjia Ni, Rong Tong, Bin Ma
2019 Interspeech 2019  
In this work, we adopt a more advanced cycle-consistency GAN (CycleGAN) to address the training failure problem due to mode collapse of regular GANs.  ...  Recently, the development of multi-task joint-learning schemes that address noise reduction and ASR criteria in a unified modeling framework has shown promising improvements, but the model training highly  ...  Therefore, the adversarial learning process of CycleGANs contains two types of loss optimizations: adversarial loss and cycle consistency loss.  ...
doi:10.21437/interspeech.2019-2078 dblp:conf/interspeech/ZhaoNTM19 fatcat:3bfy4hfeybgrtfqjo5buk4f43q
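
The snippet above names the two terms optimized when training CycleGANs. Below is a minimal PyTorch-style sketch of that combined objective; the generator/discriminator names and the lambda weight are illustrative assumptions, not the paper's code.

    import torch
    import torch.nn.functional as F

    def cyclegan_generator_loss(G_xy, G_yx, D_x, D_y, real_x, real_y, lambda_cyc=10.0):
        """Adversarial loss plus cycle consistency loss for the two generators."""
        fake_y = G_xy(real_x)   # X -> Y
        fake_x = G_yx(real_y)   # Y -> X

        # Adversarial (least-squares) term: each generator tries to fool its discriminator.
        pred_y, pred_x = D_y(fake_y), D_x(fake_x)
        adv = F.mse_loss(pred_y, torch.ones_like(pred_y)) + F.mse_loss(pred_x, torch.ones_like(pred_x))

        # Cycle consistency term: X -> Y -> X and Y -> X -> Y should reconstruct the inputs.
        cyc = F.l1_loss(G_yx(fake_y), real_x) + F.l1_loss(G_xy(fake_x), real_y)

        return adv + lambda_cyc * cyc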

Duality Regularization for Unsupervised Bilingual Lexicon Induction [article]

Xuefeng Bai and Yue Zhang and Hailong Cao and Tiejun Zhao
2019 arXiv   pre-print
In this paper, we propose to train primal and dual models jointly, using regularizers to encourage consistency in back translation cycles.  ...  Unsupervised bilingual lexicon induction naturally exhibits duality, which results from symmetry in back-translation. For example, EN-IT and IT-EN induction can be mutually primal and dual problems.  ...  As a result, the proposed model has two learning objectives: i) an adversarial loss (L_adv) for each model, as in the baseline; ii) a cycle consistency loss (L_cycle) on each side to keep F and G from  ...
arXiv:1909.01013v1 fatcat:nqndyhydtrbabg6uwejs3krqei
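
For the "cycle consistency loss on each side" mentioned above, a small sketch helps. Purely for illustration, assume the primal and dual models F and G are square linear mappings W_xy and W_yx between the two embedding spaces (the paper's actual models and weighting may differ); the regularizer then penalizes embeddings that do not survive the round trip.

    import torch

    def back_translation_cycle_loss(W_xy, W_yx, X, Y):
        """Penalize embeddings that change after a full back-translation cycle."""
        X_cycle = X @ W_xy @ W_yx   # X -> Y embedding space -> back to X
        Y_cycle = Y @ W_yx @ W_xy   # Y -> X embedding space -> back to Y
        return ((X_cycle - X) ** 2).mean() + ((Y_cycle - Y) ** 2).mean()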

OnlineAugment: Online Data Augmentation with Less Domain Knowledge [article]

Zhiqiang Tang, Yunhe Gao, Leonid Karlinsky, Prasanna Sattigeri, Rogerio Feris, Dimitris Metaxas
2020 arXiv   pre-print
In this work, we offer an orthogonal online data augmentation scheme together with three new augmentation networks, co-trained with the target learning task.  ...  Recently, great advances have been made in searching for optimal augmentation policies in the image classification domain.  ...  To pilot adversarial training for generalization, we design new regularization terms and add meta-learning in OnlineAugment.  ... 
arXiv:2007.09271v2 fatcat:smuh622agvfmneznwlqqbxclqq

Improved Network Robustness with Adversary Critic [article]

Alexander Matyasko, Lap-Pui Chau
2018 arXiv   pre-print
We formulate the problem of learning a robust classifier in the framework of Generative Adversarial Networks (GAN), where the adversarial attack on the classifier acts as a generator and the critic network learns to distinguish between regular and adversarial images.  ...  We thank NVIDIA Corporation for the donation of the GeForce Titan X and GeForce Titan X (Pascal) used in this research.  ...
arXiv:1810.12576v1 fatcat:sj77lhqxa5glxfqmuddg35qere
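
As a minimal sketch of the critic idea described above (a binary critic scores regular images as real and adversarial images as fake), assuming a PyTorch setup with a hypothetical critic module; this is not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def critic_loss(critic, clean_x, adv_x):
        """Train the critic to assign 1 to regular images and 0 to adversarial images."""
        logits_clean = critic(clean_x)
        logits_adv = critic(adv_x.detach())   # do not backprop into the attack here
        return (F.binary_cross_entropy_with_logits(logits_clean, torch.ones_like(logits_clean))
                + F.binary_cross_entropy_with_logits(logits_adv, torch.zeros_like(logits_adv)))

    def fool_critic_loss(critic, adv_x):
        """The attacked classifier is rewarded when its adversarial images look regular to the critic."""
        logits_adv = critic(adv_x)
        return F.binary_cross_entropy_with_logits(logits_adv, torch.ones_like(logits_adv))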

ALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching [article]

Chunyuan Li, Hao Liu, Changyou Chen, Yunchen Pu, Liqun Chen, Ricardo Henao, Lawrence Carin
2017 arXiv   pre-print
Further, we introduce an extension for semi-supervised learning tasks. Theoretical results are validated on synthetic data and in real-world applications.  ...  We unify a broad family of adversarial models as joint distribution matching problems. Our approach stabilizes learning of unsupervised bidirectional adversarial learning methods.  ...  This research was supported in part by ARO, DARPA, DOE, NGA, ONR and NSF.  ...
arXiv:1709.01215v2 fatcat:dw7iht7bpzehjcm2vmzozar3x4

Image to Image Translation using Deep Learning Techniques

S. Ramya, S. Anchana, A.M. Bavidhraa, R. Devanand
2020 International Journal of Computer Applications  
Our innovation is that we train and test the cycle-consistent adversarial networks using our dataset.  ...  We have tried to fine-tune hyperparameters such as batch size, learning rate, and lambda in the loss function, and have tried adding dropout to our network.  ...  Cycle-Consistent Adversarial Networks: The second method combines adversarial losses and cycle consistency losses to learn mapping functions between two domains.  ...
doi:10.5120/ijca2020920745 fatcat:kilrichqqzetfeo2vxaemijddm

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer [article]

Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu
2022 arXiv   pre-print
In particular, we introduce a new regularization module along with a joint training strategy to reconcile the conflicts between the adversarial noise and the cycle consistency loss in makeup transfer,  ...  In this paper, we propose adversarial makeup transfer GAN (AMT-GAN), a novel face protection method aiming at constructing adversarial face images that preserve stronger black-box transferability and better  ...  Leo's work is supported in part by the National Natural Science Foundation of China (Grant No. 61702221). Libing's work is supported by the Key R&D plan of Hubei Province (No. 2021BAA025).  ...
arXiv:2203.03121v2 fatcat:kogfrvyslvhqli3tdfylfkvqv4

CANZSL: Cycle-Consistent Adversarial Networks for Zero-Shot Learning from Natural Language [article]

Zhi Chen, Jingjing Li, Yadan Luo, Zi Huang, Yang Yang
2019 arXiv   pre-print
To address this issue, we propose a novel method named Cycle-consistent Adversarial Networks for Zero-Shot Learning (CANZSL).  ...  Specifically, a multi-modal consistent bidirectional generative adversarial network is trained to handle unseen instances by leveraging noise in the natural language.  ...  Conclusions: In this paper, we proposed novel cycle-consistent adversarial networks for ZSL from natural language, which leverage a multi-modal cycle-consistency loss to regularize the visual feature generator  ...
arXiv:1909.09822v1 fatcat:r2cw3gt4bzgy3a554ue4yg4rum

Implicit Autoencoders [article]

Alireza Makhzani
2019 arXiv   pre-print
We use two generative adversarial networks to define the reconstruction and the regularization cost functions of the implicit autoencoder, and derive the learning rules based on maximum-likelihood learning  ...  We show the applications of implicit autoencoders in disentangling content and style information, clustering, semi-supervised classification, learning expressive variational distributions, and multimodal  ...  We further proposed cycle implicit autoencoders and showed that they can learn multimodal image-to-image mappings.  ... 
arXiv:1805.09804v2 fatcat:mwjpf6wuqnhqdny3ockp4hh2b4

Calligraphy Fonts Generation Based on Generative Adversarial Networks

Guozhou Zhang, Wencheng Huang, Ru Chen, Jinyu Yang, Hong Peng
2019 Innovative Computing Information and Control Express Letters, Part B: Applications  
This paper extends the study of style transfer to calligraphy fonts, and proposes a method based on generative adversarial networks (GAN).  ...  Style transfer has been a hot research topic in the field of image processing in recent years, but current studies on style transfer mainly focus on oil paintings, landscape paintings and other images  ...  An example of the architecture of our method is shown in Figure 2. 3.1. Loss function: Our total loss consists of two parts: an adversarial loss and a cycle consistency loss.  ...
doi:10.24507/icicelb.10.03.203 fatcat:et57tdueqjglzbgj2jjbzpj5lu

Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [article]

Junhao Dong, Yuan Wang, Jianhuang Lai, Xiaohua Xie
2022 arXiv   pre-print
Specifically, we propose the Transferable Cycle Adversary Generative Adversarial Network (TCA-GAN) to construct the adversarial perturbation for disrupting unknown DeepFake systems.  ...  We also present a novel post-regularization module for enhancing the transferability of generated adversarial examples.  ...  Concretely, we mainly assess the disruption of face-swapped images in terms of cycle-consistency, latent variable disruption, and post-regularization.  ...
arXiv:2204.12347v1 fatcat:76rcbkxadfbmpl2j7a3pekl3gi

Pseudo Conditional Regularization for Inverse Mapping of GANs

Sang-Heon Shim, Jae-Pil Heo
2020 IEEE Access  
We demonstrate that our novel bidirectional adversarial learning frameworks improve the performance in sample reconstruction, generation, and interpolation.  ...  Our models are specifically guided by pseudo conditions defined by the proximity relationships among data in an unsupervised-learned feature space.  ...  If the adversarial learning based on Eq. 8 is able to form a regularized latent space with G_P, E_P, and D_P, then it should also be possible to train G_I and E_I by adversarial learning with D_P  ...
doi:10.1109/access.2020.2992850 fatcat:r5rjuln33veydgo6ly3y5qseuu

Dual Mixup Regularized Learning for Adversarial Domain Adaptation [article]

Yuan Wu, Diana Inkpen, Ahmed El-Roby
2020 arXiv   pre-print
In order to alleviate the above issues, we propose a dual mixup regularized learning (DMRL) method for UDA, which not only guides the classifier in enhancing consistent predictions in-between samples,  ...  Recent advances in unsupervised domain adaptation (UDA) rely on adversarial learning to disentangle the explanatory and transferable features for domain adaptation.  ...  Dual Mixup Regularization: In this work, we propose a dual mixup regularized learning (DMRL) method based on adversarial domain adaptation.  ...
arXiv:2007.03141v2 fatcat:fdmh32xdijerdiopgsbls3qvqi
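
For background on the mixup regularization named above, here is a minimal PyTorch-style sketch of the standard mixup operation (generic form only; DMRL's category- and domain-level variants add terms not shown here).

    import torch

    def mixup(x, y_onehot, alpha=0.2):
        """Convexly combine random pairs of samples and their one-hot labels."""
        lam = torch.distributions.Beta(alpha, alpha).sample()
        perm = torch.randperm(x.size(0))
        x_mix = lam * x + (1 - lam) * x[perm]
        y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
        return x_mix, y_mix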
Showing results 1 — 15 out of 19,707 results