6,074 Hits in 5.0 sec

Evaluating Generative Adversarial Networks on Explicitly Parameterized Distributions [article]

Shayne O'Brien, Matt Groh, Abhimanyu Dubey
2018 arXiv   pre-print
Rather than designing metrics for feature spaces with unknown characteristics, we propose to measure GAN performance by evaluating on explicitly parameterized, synthetic data distributions.  ...  The true distribution parameterizations of commonly used image datasets are inaccessible.  ...  Introduction: Generative adversarial network (GAN) optimization stability and convergence properties remain poorly understood despite the introduction of hundreds of GAN variants since their conception  ... 
arXiv:1812.10782v1 fatcat:vkeo3bw42vajfbewfefx7645l4
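The evaluation idea in this abstract can be sketched in a few lines. The sketch below is hypothetical (not the paper's code): the target is a 1-D Gaussian with known parameters, and generator quality is scored by how well those parameters are recovered from generated samples, with no feature-space metric needed.

```python
import random
import statistics

# Score a generator against an explicitly parameterized target, here
# N(true_mu, true_sigma), by comparing the parameters recovered from its
# samples to the ground truth.

def parameter_error(samples, true_mu, true_sigma):
    mu_hat = statistics.fmean(samples)
    sigma_hat = statistics.pstdev(samples)
    return abs(mu_hat - true_mu), abs(sigma_hat - true_sigma)

random.seed(0)
# Stand-in "generator": slightly biased samples, as a trained GAN might give.
fake_samples = [random.gauss(0.1, 1.0) for _ in range(10_000)]
mu_err, sigma_err = parameter_error(fake_samples, true_mu=0.0, true_sigma=1.0)
```

Because the true parameterization is known, `mu_err` and `sigma_err` are direct quality measures, which is exactly what natural image datasets do not admit.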

Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models [article]

Aditya Grover, Manik Dhar, Stefano Ermon
2018 arXiv   pre-print
To bridge this gap, we propose Flow-GANs, a generative adversarial network for which we can perform exact likelihood evaluation, thus supporting both adversarial and maximum likelihood training.  ...  Implicit models such as generative adversarial networks (GAN) often generate better samples compared to explicit models trained by maximum likelihood.  ...  It is a generative adversarial network which allows for tractable likelihood evaluation, exactly like in a flow model.  ... 
arXiv:1705.08868v2 fatcat:2ijjrwwbxfepvmbdiekyfwrq24
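The exact-likelihood property described here follows from the change-of-variables formula for an invertible generator. A minimal sketch, assuming a toy 1-D affine flow x = exp(s)·z + t with a standard normal prior (a stand-in, not the Flow-GAN architecture):

```python
import math

# Exact likelihood in a normalizing-flow generator via change of variables.
# Generator: x = exp(s) * z + t (invertible affine map), prior z ~ N(0, 1).

def log_prior(z):
    # Standard normal log-density.
    return -0.5 * (z * z + math.log(2.0 * math.pi))

def log_likelihood(x, s, t):
    # Invert the generator, then add the log |det Jacobian| of the inverse:
    # log p_X(x) = log p_Z(z) - s, with z = (x - t) * exp(-s).
    z = (x - t) * math.exp(-s)
    return log_prior(z) - s
```

Maximum likelihood training would maximize this quantity on data; adversarial training would instead score samples x = exp(s)·z + t with a discriminator, and Flow-GAN's point is that both are available for the same model.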

Discriminative Feature Selection via A Structured Sparse Subspace Learning Module

Zheng Wang, Feiping Nie, Lai Tian, Rong Wang, Xuelong Li
2020 Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence  
the proposed S^3L module is presented to explicitly solve the proposed problem with a closed-form solution and a strict convergence proof.  ...  improves the discriminability of the model and avoids the parameter-tuning trouble of methods that use L2,1-norm regularization; 2) An alternative iterative optimization algorithm based on  ...  A generative classifier explicitly models conditional distributions of inputs given the class labels.  ... 
doi:10.24963/ijcai.2020/412 dblp:conf/ijcai/WangY20a fatcat:iih2ntng7nh3rgu6n3muqgg3oq

Modeling Adversarial Noise for Adversarial Training [article]

Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
2022 arXiv   pre-print
Empirical evaluations demonstrate that our method could effectively improve adversarial accuracy.  ...  Deep neural networks have been demonstrated to be vulnerable to adversarial noise, prompting the development of defenses against adversarial attacks.  ...  The transition matrix explicitly models adversarial noise and helps us infer natural labels. We design a transition network to generate the instance-independent transition matrix.  ... 
arXiv:2109.09901v4 fatcat:amb5eg5u6ne53piovapovqtcjy
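The transition-matrix idea in the snippet can be illustrated with a toy two-class example (the matrix values and class count below are invented for this sketch): a row-stochastic matrix T maps natural-label probabilities to adversarial-label probabilities, so observing the adversarial side constrains the natural labels.

```python
# Hypothetical sketch: T[i][j] = P(adversarial label j | natural label i).
# Applying T to a natural-label distribution yields the label distribution
# seen under adversarial noise; inverting this map lets natural labels be
# inferred from adversarially perturbed predictions.

def apply_transition(T, p_natural):
    n_out = len(T[0])
    return [sum(p_natural[i] * T[i][j] for i in range(len(p_natural)))
            for j in range(n_out)]

T = [[0.9, 0.1],   # assumed 2-class, row-stochastic transition matrix
     [0.2, 0.8]]
p_adv = apply_transition(T, [1.0, 0.0])   # a certain natural label 0
```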

Prior Networks for Detection of Adversarial Attacks [article]

Andrey Malinin, Mark Gales
2018 arXiv   pre-print
One of the advantages of this approach is that the behaviour of a Prior Network can be explicitly tuned to, for example, predict high uncertainty in regions where there are no training data samples.  ...  Even when the adversarial attacks are constructed with full knowledge of the detection mechanism, it is shown to be highly challenging to successfully generate an adversarial sample.  ...  Prior Networks parameterize a distribution over output distributions, which allows them to separately model data uncertainty and distributional uncertainty.  ... 
arXiv:1812.02575v1 fatcat:hubudxmpbzd4ho5zkie6swnsia
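The "distribution over output distributions" here is, in the Prior Networks line of work, a Dirichlet. A toy sketch of how its precision separates the two kinds of uncertainty (the concentration values below are invented for illustration):

```python
# A Prior Network outputs Dirichlet concentrations alpha over classes.
# The mean is the expected categorical distribution; the precision
# alpha_0 = sum(alpha) encodes how certain the network is about that
# distribution itself, dropping on out-of-distribution or adversarial inputs.

def dirichlet_mean_and_precision(alpha):
    a0 = sum(alpha)
    return [a / a0 for a in alpha], a0

in_domain, prec_in = dirichlet_mean_and_precision([100.0, 1.0, 1.0])
ood, prec_ood = dirichlet_mean_and_precision([1.0, 1.0, 1.0])
# Low precision (prec_ood) flags distributional uncertainty, which a flat
# predictive mean alone cannot distinguish from mere data uncertainty.
```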

MR-Contrast-Aware Image-to-Image Translations with Generative Adversarial Networks [article]

Jonas Denck, Jens Guehring, Andreas Maier, Eva Rothgang
2021 arXiv   pre-print
Methods: Therefore, we trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time.  ...  Our approach is motivated by style transfer networks, where the "style" of an image is explicitly given in our case, as it is determined by the MR acquisition parameters our network is conditioned on  ...  Image-to-Image Generative Adversarial Network: Image-to-image GANs, e.g., pix2pix, mainly focus on pixel-to-pixel image synthesis.  ... 
arXiv:2104.01449v1 fatcat:qxlmnp2nrvatrjm7jljzg6odee

Variational Inference for Graph Convolutional Networks in the Absence of Graph Data and Adversarial Settings [article]

Pantelis Elinas, Edwin V. Bonilla, Louis Tiao
2020 arXiv   pre-print
data and when the network structure is subjected to adversarial perturbations.  ...  We show that, on real datasets, our approach can outperform state-of-the-art Bayesian and non-Bayesian graph neural network algorithms on the task of semi-supervised classification in the absence of graph  ...  This work was conducted in partnership with the Defence Science and Technology Group, through the Next Generation Technologies Program.  ... 
arXiv:1906.01852v5 fatcat:uxkkrn2klfbllap7srrsl46jbq

Improving Generalization of Reinforcement Learning with Minimax Distributional Soft Actor-Critic [article]

Yangang Ren, Jingliang Duan, Shengbo Eben Li, Yang Guan, Qi Sun
2020 arXiv   pre-print
The distributional framework aims to learn a state-action return distribution, from which we can model the risk of different returns explicitly, thereby formulating a risk-averse protagonist policy and a risk-seeking adversarial policy.  ...  J(μ) = E_{Z(s,a,u)∼Z_θ}[Z(s,a,u)] − λ_u·σ(Z(s,a,u))  (10)  Suppose the mean Q_θ and variance σ_θ of the return distribution can be explicitly parameterized by parameters θ.  ... 
arXiv:2002.05502v2 fatcat:tznlf7mb25fsri3yzetehjuvgm
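The mean-minus-deviation risk objective in the snippet can be illustrated on sampled returns. A pure-Python sketch under stated assumptions (the weight λ_u = 0.5 and the return samples are invented; the paper estimates these quantities from a learned return distribution, not a fixed list):

```python
import statistics

# Risk-sensitive objective over a return distribution: the risk-averse
# protagonist maximizes mean - lambda_u * sigma, while the risk-seeking
# adversary flips the sign on the deviation term.

def risk_sensitive_objective(returns, lambda_u, risk_averse=True):
    mean = statistics.fmean(returns)
    sigma = statistics.pstdev(returns)   # sigma of the return distribution
    return mean - lambda_u * sigma if risk_averse else mean + lambda_u * sigma

returns = [1.0, 2.0, 3.0, 2.0]
J_protagonist = risk_sensitive_objective(returns, lambda_u=0.5)
J_adversary = risk_sensitive_objective(returns, lambda_u=0.5, risk_averse=False)
```

For the same return samples, the adversary's objective is always at least the protagonist's, with the gap growing with return variance.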

Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders [article]

Ananya Harsh Jha, Saket Anand, Maneesh Singh, V. S. R. Veeravasarapu
2018 arXiv   pre-print
We show compelling results of disentangled latent subspaces on three datasets and compare with recent works that leverage adversarial training.  ...  Our non-adversarial approach is in contrast with the recent works that combine adversarial training with auto-encoders to disentangle representations.  ...  Generative Adversarial Networks. GANs [4] have been shown to model complex, high-dimensional data distributions and generate novel samples from them.  ... 
arXiv:1804.10469v1 fatcat:jrms7ngfu5c4peg5zsk7td7qhi

Visual Transfer for Reinforcement Learning via Wasserstein Domain Confusion [article]

Josh Roy, George Konidaris
2020 arXiv   pre-print
We introduce Wasserstein Adversarial Proximal Policy Optimization (WAPPO), a novel algorithm for visual transfer in Reinforcement Learning that explicitly learns to align the distributions of extracted  ...  WAPPO approximates and minimizes the Wasserstein-1 distance between the distributions of features from source and target domains via a novel Wasserstein Confusion objective.  ...  It jointly trains an adversary network to classify samples as real or fake and a generator network to "fool" the adversary, minimizing the Jensen-Shannon divergence (JS divergence) between the distributions  ... 
arXiv:2006.03465v1 fatcat:yfobpv4tdrhp5njilmgygf5qnu
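For 1-D empirical distributions with equal sample counts, the Wasserstein-1 distance that WAPPO approximates between feature distributions has a simple closed form: the mean absolute difference of the sorted samples. A toy sketch of that quantity (not the WAPPO objective itself, which works on high-dimensional features via a critic):

```python
# Empirical Wasserstein-1 distance between two equal-size 1-D samples:
# in 1-D, optimal transport pairs sorted values with sorted values.

def wasserstein1_1d(xs, ys):
    assert len(xs) == len(ys), "equal-size samples assumed"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Shifting a distribution by c moves it exactly c in Wasserstein-1 distance.
d = wasserstein1_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])   # → 1.0
```

Unlike the JS divergence used in the original GAN objective, this distance stays finite and informative even when the two supports do not overlap, which is the usual motivation for Wasserstein-based alignment.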

Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness [article]

Andrey Malinin, Mark Gales
2019 arXiv   pre-print
Second, taking advantage of this new training criterion, this paper investigates using Prior Networks to detect adversarial attacks and proposes a generalized form of adversarial training.  ...  This addresses issues in the nature of the training data target distributions, enabling prior networks to be successfully trained on classification tasks with arbitrarily many classes, as well as improving  ...  Acknowledgments This paper reports on research partly supported by Cambridge Assessment, University of Cambridge. This work is also partly funded by a DTA EPSRC award.  ... 
arXiv:1905.13472v2 fatcat:fcw2uue5vngvnhth35ajnt3d2i

MR-contrast-aware image-to-image translations with generative adversarial networks

Jonas Denck, Jens Guehring, Andreas Maier, Eva Rothgang
2021 International Journal of Computer Assisted Radiology and Surgery  
Methods: Therefore, we trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time.  ...  Our approach is motivated by style transfer networks, where the "style" of an image is explicitly given in our case, as it is determined by the MR acquisition parameters our network is conditioned on  ...  Material and methods: Generative adversarial network. In this section, we give a short overview of generative adversarial networks and important adaptations for medical image-to-image synthesis.  ... 
doi:10.1007/s11548-021-02433-x pmid:34148167 pmcid:PMC8616894 fatcat:vxb6lqgknng6jkdywnba6zftnm

Concept-Oriented Deep Learning: Generative Concept Representations [article]

Daniel T. Chang
2018 arXiv   pre-print
We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative  ...  Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised  ...  All of the generative models [7] represent probability distributions over multiple variables in some way. Some allow the probability distribution function to be evaluated explicitly.  ... 
arXiv:1811.06622v1 fatcat:lib4mdlo5vfk3cek4sr2ubsb4m

Variational Leakage: The Role of Information Complexity in Privacy Leakage [article]

Amir Ahooye Atashin, Behrooz Razeghi, Deniz Gündüz, Slava Voloshynovskiy
2021 arXiv   pre-print
Considering the supervised representation learning setup and using neural networks to parameterize the variational bounds of information quantities, we study the impact of the following factors on the  ...  We conduct extensive experiments on Colored-MNIST and CelebA datasets to evaluate the effect of information complexity on the amount of intrinsic leakage.  ...  The common approach is to use deep neural networks (DNNs) to model/parameterize these distributions.  ... 
arXiv:2106.02818v1 fatcat:5q6ww2tyjnbrdp2m45zcmb7yky

Open-set Adversarial Defense [article]

Rui Shao and Pramuditha Perera and Pong C. Yuen and Vishal M. Patel
2020 arXiv   pre-print
Furthermore, we show that adversarial defense mechanisms trained on known classes do not generalize well to open-set samples.  ...  The objective of open-set recognition is to identify samples from open-set classes during testing, while adversarial defense aims to defend the network against images with imperceptible adversarial perturbations  ...  It provides defense against adversarial attacks by training the network on adversarially perturbed images generated on-the-fly based on the model's current parameters.  ... 
arXiv:2009.00814v1 fatcat:ovioqofwxnembaodmbauizbqfi