Unrolled Generative Adversarial Networks
[article]
2017
arXiv
pre-print
We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. ...
generator. ...
GENERATIVE ADVERSARIAL NETWORKS: While most deep generative models are trained by maximizing log likelihood or a lower bound on log likelihood, GANs take a radically different approach that does not require ...
arXiv:1611.02163v4
fatcat:likkc2vogjghvpgfleg76ciafe
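As an illustrative aside for the entry above (not the authors' code): the unrolling idea can be seen on a toy bilinear min-max game in which the "generator" differentiates through K gradient-ascent steps of the "discriminator". The game min_g max_d g*d, the step size, and the iteration counts below are assumptions chosen purely for the demonstration.

```python
import numpy as np

# Toy sketch of unrolled optimization (not the paper's code): in the bilinear
# game min_g max_d g*d, plain simultaneous gradient updates slowly spiral away
# from the saddle point, while letting the "generator" differentiate through K
# unrolled ascent steps of the "discriminator" makes the iterates contract
# toward the equilibrium (0, 0).

def unrolled_surrogate_grad(g, d, K, eta):
    """d/dg of f(g, d_K(g)) for f(g, d) = g*d, with d_K = K ascent steps on d."""
    # Each ascent step is d <- d + eta*g, so d_K = d + K*eta*g and the total
    # derivative is d_K + g * d(d_K)/dg = d_K + g*K*eta.
    d_K = d + K * eta * g
    return d_K + g * K * eta

eta = 0.1
for K in (0, 5):                    # K = 0 recovers the ordinary update
    g, d = 1.0, 1.0
    for _ in range(200):
        grad_g = unrolled_surrogate_grad(g, d, K, eta)
        d = d + eta * g             # the discriminator itself takes one real step
        g = g - eta * grad_g        # the generator uses the unrolled gradient
    print(f"K={K}: |g| + |d| after 200 steps = {abs(g) + abs(d):.4f}")
```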
On Instabilities of Conventional Multi-Coil MRI Reconstruction to Small Adverserial Perturbations
[article]
2021
arXiv
pre-print
We investigate instabilities caused by small adversarial attacks for multi-coil acquisitions. ...
Our results suggest that parallel imaging and multi-coil CS exhibit considerable instabilities against small adversarial perturbations. ...
Figure 1. The process for generating the small adversarial perturbation using unrolled CG-SENSE. ...
arXiv:2102.13066v1
fatcat:mig3afgwpjdn7dswevijcn7ze4
Wasserstein GANs for MR Imaging: from Paired to Unpaired Training
[article]
2020
arXiv
pre-print
The generator is an unrolled neural network -- a cascade of convolutional and data consistency layers. ...
The reconstruction networks consist of a generator which suppresses the input image artifacts, and a discriminator using a pool of (unpaired) labels to adjust the reconstruction quality. ...
Adversarial methods used in these works were adopted from entropic generative adversarial networks (EGANs) [20] or least-squares GANs (LSGANs) [21]. ...
arXiv:1910.07048v3
fatcat:wtyyrb7b6recbf4pg263cwqd3q
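A minimal sketch of the cascade structure mentioned above, alternating a denoising step with a data-consistency step; this is not the paper's code, and single-coil Cartesian sampling plus a hand-written local-average "denoiser" are assumptions standing in for the learned convolutional layers.

```python
import numpy as np

# Sketch of one cascade block of an unrolled reconstruction generator:
# a denoising step followed by a data-consistency step that re-inserts the
# measured k-space samples.

def data_consistency(image, measured_kspace, mask):
    """Replace k-space entries at sampled locations with the measurements."""
    kspace = np.fft.fft2(image)
    kspace[mask] = measured_kspace[mask]
    return np.fft.ifft2(kspace)

def denoise(image):
    """Placeholder for learned layers: a simple 5-point local average."""
    p = np.pad(image, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

rng = np.random.default_rng(0)
truth = rng.standard_normal((64, 64))        # stand-in for a ground-truth image
mask = rng.random((64, 64)) < 0.3            # keep ~30% of k-space samples
measured = np.fft.fft2(truth) * mask

x = np.fft.ifft2(measured)                   # start from the zero-filled image
for _ in range(5):                           # unroll a few cascade iterations
    x = denoise(x.real)
    x = data_consistency(x, measured, mask)

print("max residual at sampled k-space locations:",
      np.abs(np.fft.fft2(x)[mask] - measured[mask]).max())
```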
Recurrent Feedback Improves Feedforward Representations in Deep Neural Networks
[article]
2019
arXiv
pre-print
In this study, we find that introducing feedback loops and horizontal recurrent connections to a deep convolutional neural network (VGG16) allows the network to become more robust against noise and occlusion ...
This suggests that recurrent feedback and contextual modulation transform the feedforward representations of the network in a meaningful and interesting way. ...
The adversarial noise was generated by the standard Fast Gradient Sign Method (FGSM) attack. ...
arXiv:1912.10489v1
fatcat:76wahajejjhkrfoz6s6iugds6m
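For reference, a minimal FGSM sketch (not the study's code): with a logistic-regression stand-in for the network, the input gradient is available in closed form, so the one-step attack x_adv = x + eps * sign(grad_x loss) fits in a few lines. The toy model, label convention, and eps value are assumptions.

```python
import numpy as np

# Minimal FGSM sketch. For a deep net the same recipe applies, with the input
# gradient obtained by backpropagation instead of the closed form below.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.standard_normal(10), 0.1   # a fixed, already-trained toy model
x, y = rng.standard_normal(10), 1.0   # an input and its true label in {0, 1}

# Cross-entropy loss L = -[y log p + (1 - y) log(1 - p)] with p = sigmoid(w.x + b);
# its gradient with respect to the input is (p - y) * w.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)     # one-step FGSM perturbation

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```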
Adversarial Regularization as Stackelberg Game: An Unrolled Optimization Approach
[article]
2022
arXiv
pre-print
Adversarial regularization has been shown to improve the generalization performance of deep learning models in various natural language processing tasks. ...
Such a formulation treats the adversarial and the defending players equally, which is undesirable because only the defending player contributes to the generalization performance. ...
...; Finn et al., 2017), meta-learning (Andrychowicz et al., 2016), and Generative Adversarial Networks (Metz et al., 2017). ...
arXiv:2104.04886v3
fatcat:sld3r2tbrbctfcns4jhltrtf5e
Stabilizing Adversarial Nets With Prediction Methods
[article]
2018
arXiv
pre-print
We propose a simple modification of stochastic gradient descent that stabilizes adversarial networks. ...
Adversarial neural networks solve many important problems in data science, but are notoriously difficult to train. ...
GENERATIVE ADVERSARIAL NETWORKS: Next, we test the efficacy and stability of our proposed predictive step on generative adversarial networks (GANs), which are formulated as saddle point problems (4) and ...
arXiv:1705.07364v3
fatcat:lddbxc24yjaq5lkemqdgofv43u
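A toy numerical sketch of the prediction step described above, applied to the bilinear saddle-point problem min_u max_v u*v rather than to a neural network; the problem, step size, and iteration count are assumptions for illustration only.

```python
# Toy sketch of the prediction step: the maximizer's update is evaluated at an
# extrapolated ("predicted") copy of u, u_bar = u_new + (u_new - u_old), which
# makes the otherwise non-converging alternating updates contract toward (0, 0).

eta = 0.1

def run(predict):
    u, v = 1.0, 1.0
    for _ in range(200):
        u_new = u - eta * v                           # minimizer's gradient step
        u_bar = 2 * u_new - u if predict else u_new   # predicted copy of u
        v = v + eta * u_bar                           # maximizer sees u_bar
        u = u_new
    return abs(u) + abs(v)

print("plain alternating steps:", run(False))
print("with prediction step:   ", run(True))
```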
MetaPoison: Practical General-purpose Clean-label Data Poisoning
[article]
2021
arXiv
pre-print
Existing attacks for data poisoning neural networks have relied on hand-crafted heuristics, because solving the poisoning problem directly via bilevel optimization is generally thought of as intractable ...
MetaPoison can achieve arbitrary adversary goals -- like using poisons of one class to make a target image don the label of another arbitrarily chosen class. ...
Even large numbers of unroll steps may improve the performance slightly. ... number of unrolls is sufficient when our ensembling and network reinitialization strategy is used. ...
arXiv:2004.00225v2
fatcat:a3fms4d3lbcpxkzniagesxd63m
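To make the bilevel/unrolling idea concrete, here is a hedged one-step sketch (not MetaPoison itself): a poison perturbation is optimized by differentiating through a single SGD step of a tiny linear victim model so that a chosen target input is pushed toward the attacker's desired label. The hinge loss, the linear model, and all hyperparameters are illustrative assumptions.

```python
import torch

# One-step "unrolled" poisoning sketch: learn a small perturbation delta on a
# training point so that, after the victim takes a single differentiable SGD
# step on the poisoned loss, a chosen target input moves toward the attacker's
# desired label.

torch.manual_seed(0)
d = 5
w = torch.zeros(d, requires_grad=True)                  # victim's linear model
x_poison, y_poison = torch.randn(d), torch.tensor(1.0)  # poisoned training point
x_target, y_adv = torch.randn(d), torch.tensor(-1.0)    # target and desired label
delta = torch.zeros(d, requires_grad=True)              # poison perturbation
lr_victim, lr_poison = 0.1, 0.5

def hinge(score, label):
    return torch.relu(1.0 - label * score)

for step in range(200):
    # inner (victim) step, kept differentiable so we can unroll through it
    train_loss = hinge((x_poison + delta) @ w, y_poison)
    g = torch.autograd.grad(train_loss, w, create_graph=True)[0]
    w_unrolled = w - lr_victim * g
    # outer (attacker) objective, evaluated on the unrolled parameters
    adv_loss = hinge(x_target @ w_unrolled, y_adv)
    delta_grad = torch.autograd.grad(adv_loss, delta)[0]
    with torch.no_grad():
        delta -= lr_poison * delta_grad
        delta.clamp_(-0.1, 0.1)          # keep the perturbation small

print("attacker loss after optimizing delta:", float(adv_loss))
```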
Towards Robust Image Classification Using Sequential Attention Models
2020
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Finally, we show that some of the adversarial examples generated by attacking our model are quite different from conventional adversarial examples -- they contain global, salient and spatially coherent structures ...
Second, we show that by varying the number of attention steps (glances/fixations) for which the model is unrolled, we are able to make its defense capabilities stronger, even in light of stronger attacks ...
Figure 8 shows several examples of generated adversarial examples for different attack strengths for an adversarially trained S3TA model (with 4 unrolling steps) and an adversarially trained ResNet-152 ...
doi:10.1109/cvpr42600.2020.00950
dblp:conf/cvpr/ZoranCHGMK20
fatcat:xagg4zxil5ggve4nht2ora5szy
Towards Robust Image Classification Using Sequential Attention Models
[article]
2019
arXiv
pre-print
Finally, we show that some of the adversarial examples generated by attacking our model are quite different from conventional adversarial examples --- they contain global, salient and spatially coherent ...
In this paper we propose to augment a modern neural-network architecture with an attention model inspired by human perception. ...
Figure 8 shows several examples of generated adversarial examples for different attack strengths for an adversarially trained S3TA model (with 4 unrolling steps) and an adversarially trained ResNet-152 ...
arXiv:1912.02184v1
fatcat:ke246kxbazbdlpllqn2adlkune
Estimating Lipschitz constants of monotone deep equilibrium models
2021
International Conference on Learning Representations
Several methods have been proposed in recent years to provide bounds on the Lipschitz constants of deep networks, which can be used to provide robustness guarantees, generalization bounds, and characterize ...
We also highlight how to use these bounds to develop PAC-Bayes generalization bounds that do not depend on any depth of the network, and which avoid the exponential depth-dependence of comparable DNN bounds ...
In Figure 8c, we can observe that the generalization bounds for these unrolled networks are larger (on the order of 10^6) as compared to the generalization bound for the monDEQ (on the order of 10^5). ...
dblp:conf/iclr/PabbarajuWK21
fatcat:quk2b6jpufcm7a4vzhiada3cxy
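For background on the depth dependence this entry contrasts against (general textbook material, not the paper's derivation): the naive Lipschitz estimate for a feedforward network multiplies per-layer spectral norms, so it can grow geometrically with depth.

```latex
% Standard composition bound (general background, not the paper's derivation).
% For f(x) = W_L\,\sigma(W_{L-1}\,\sigma(\cdots\sigma(W_1 x)\cdots)) with a
% 1-Lipschitz activation \sigma:
\[
  \operatorname{Lip}(f) \;\le\; \prod_{i=1}^{L} \lVert W_i \rVert_2 ,
\]
% so if every layer satisfies \lVert W_i \rVert_2 \approx c > 1, the estimate
% scales like c^L, i.e. exponentially in the depth L; a depth-independent bound
% such as the one for the monDEQ above avoids this growth.
```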
Mode Penalty Generative Adversarial Network with adapted Auto-encoder
[article]
2020
arXiv
pre-print
Generative Adversarial Networks (GANs) are trained to generate sample images from a distribution of interest. ...
To this end, the generator network of the GAN learns the implicit distribution of the real data set from the classification of candidate generated samples. ...
Mode-Penalty GANs
Generative Adversarial Networks: A Generative Adversarial Network (GAN) is motivated by game theory, in which two players (the generator and the discriminator) compete with each other in a zero-sum ...
arXiv:2011.07706v1
fatcat:wtzi4qqjezbsfh3mz22ofsht64
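For reference, the two-player zero-sum game alluded to above is the standard GAN minimax objective of Goodfellow et al. (2014):

```latex
\[
  \min_{G}\,\max_{D}\; V(D,G)
  \;=\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\bigl[\log D(x)\bigr]
  \;+\;
  \mathbb{E}_{z \sim p_{z}}\!\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr].
\]
```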
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
[article]
2017
arXiv
pre-print
But many of these methods, including generative adversarial networks (GANs), can be difficult to train, in part because they are prone to mode collapse, which means that they characterize only a few modes ...
To address this, we introduce VEEGAN, which features a reconstructor network, reversing the action of the generator by mapping from data to noise. ...
case [6, 10] and for deep generative adversarial networks (GANs) in particular [7]. ...
arXiv:1705.07761v3
fatcat:bdxgt3ipcrfelhbyr553ybsspy
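A minimal sketch of the noise-reconstruction idea described above (illustrative only, not VEEGAN's full objective): a reconstructor network F maps generated data back to noise, and penalizing ||F(G(z)) - z||^2 encourages the generator to keep distinct noise vectors distinguishable. The tiny 2-D networks and single optimization step are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the noise-reconstruction term only; the full VEEGAN objective has
# additional terms. Network sizes are arbitrary placeholders.

G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> data
F = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))  # data -> noise
opt = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=1e-3)

z = torch.randn(128, 2)
recon_loss = ((F(G(z)) - z) ** 2).mean()   # reconstructor reverses the generator
opt.zero_grad(); recon_loss.backward(); opt.step()
print("noise-reconstruction loss:", float(recon_loss))
```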
Gauss-Newton Unrolled Neural Networks and Data-driven Priors for Regularized PSSE with Robustness
[article]
2020
arXiv
pre-print
To further endow the physics-based DNN with robustness against bad data, an adversarial DNN training method is discussed. ...
Interestingly, the power network topology can be easily incorporated into the DNN by designing a graph neural network (GNN) based prior. ...
To this end, we preprocessed test samples using (23) to generate adversarially perturbed samples. ...
arXiv:2003.01667v3
fatcat:jciipamd7bdxdjmcwhw3ki3snq
A Review of Deep Learning Methods for Compressed Sensing Image Reconstruction and Its Medical Applications
2022
Electronics
Besides the popular MSE (L2 norm) loss, an adversarial loss [23] is used when training networks [24]. It was first proposed to train generative adversarial networks (GANs). ...
.), more effective network architectures (convolutional neural networks (CNN), recurrent neural networks (RNN), generative adversarial networks (GAN)), the availability of large datasets and stronger computational ...
Meanwhile, the authors of [178] used measurements and zero-fill reconstruction as labels to train a network. ...
doi:10.3390/electronics11040586
fatcat:zoz4hlue6vcanehrdvslspjbmi
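A minimal sketch of the training pattern described above, combining an MSE (L2) fidelity term with an adversarial term from a discriminator; this is not taken from the review, and the tiny networks, the weight lam, and the random tensors standing in for image batches are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Reconstruction network trained with MSE plus an adversarial term, alternating
# with a discriminator that separates reference images from network outputs.

recon_net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1))
disc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                     nn.Flatten(), nn.LazyLinear(1))
opt_g = torch.optim.Adam(recon_net.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
lam = 0.01  # weight of the adversarial term relative to the MSE term

def train_step(undersampled, fully_sampled):
    # 1) discriminator: distinguish reference images from network outputs
    fake = recon_net(undersampled)
    real_logits, fake_logits = disc(fully_sampled), disc(fake.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) reconstruction network: MSE fidelity plus adversarial feedback
    fake = recon_net(undersampled)
    adv_logits = disc(fake)
    g_loss = F.mse_loss(fake, fully_sampled) + \
             lam * bce(adv_logits, torch.ones_like(adv_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return g_loss.item(), d_loss.item()

# One step on random tensors shaped like a batch of 32x32 single-channel images.
print(train_step(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)))
```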
Towards Efficient and Secure Delivery of Data for Deep Learning with Privacy-Preserving
[article]
2019
arXiv
pre-print
In this setting, the adversary's attack success rate is 7.9 x 10^-90 for MoLe and 2.9 x 10^-30 for GAZELLE, respectively. ...
Privacy has recently emerged as a severe concern in deep learning: sensitive data must not be shared with third parties during deep neural network development. ...
Preventing an adversary from identifying or recovering training data from a published network model [21]; 2. ...
arXiv:1909.07632v1
fatcat:gz34ef6xdfhfdg4reuiab5dai4
Showing results 1 — 15 out of 2,428 results