43,458 Hits in 3.6 sec

Regularizing Generative Adversarial Networks under Limited Data [article]

Hung-Yu Tseng, Lu Jiang, Ce Liu, Ming-Hsuan Yang, Weilong Yang
2021 arXiv   pre-print
Recent years have witnessed the rapid progress of generative adversarial networks (GANs). However, the success of the GAN models hinges on a large amount of training data.  ...  We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data.  ...  Related Work Generative adversarial networks. Generative adversarial networks (GANs) [2, 7, 14, 27, 32, 47, 79] aim to model the target distribution using adversarial learning.  ... 
arXiv:2104.03310v1 fatcat:b5th6vdcafgc5ostnr64xdzpte
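The regularized loss in this entry can be sketched as a small function. This is an illustrative sketch only: the name `lecam_reg` and the EMA-anchor arguments are assumed, not the authors' code; it follows the idea in arXiv:2104.03310 of penalizing discriminator outputs that drift far from moving averages of the opposite branch.

```python
import numpy as np

def lecam_reg(d_real, d_fake, ema_real, ema_fake):
    """Sketch of a LeCam-style regularization term: pull discriminator
    outputs on real samples toward the moving average of fake outputs,
    and vice versa, which keeps the two branches from diverging when
    training data is limited."""
    return float(np.mean((d_real - ema_fake) ** 2)
                 + np.mean((d_fake - ema_real) ** 2))
```

The term would be added, with a small weight, to the ordinary discriminator loss; the anchors `ema_real` / `ema_fake` are exponential moving averages updated each step.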

Demotivate adversarial defense in remote sensing [article]

Adrien Chan-Hon-Tong and Gaston Lenczner and Aurelien Plyer
2021 arXiv   pre-print
In this work, we study both adversarial retraining and adversarial regularization as adversarial defenses for this purpose.  ...  their ability to deal with the inherent variety of worldwide data.  ...  This is an issue both for high-resolution datasets, which cover limited areas due to the cost of data, and for low-resolution datasets, which either deal with high-level labels only or cover limited areas  ... 

arXiv:2105.13902v1 fatcat:2tb2zvyqxrfgbklhl56lhyejyi

Regularization by Adversarial Learning for Ultrasound Elasticity Imaging [article]

Narges Mohammadi, Marvin M. Doyley, Mujdat Cetin
2021 arXiv   pre-print
In this method, the regularizer is trained based on the Wasserstein Generative Adversarial Network (WGAN) objective function, which tries to distinguish the distributions of clean and noisy images.  ...  coping with the limited training data.  ...  Since adversarial regularizers are trained based on image distribution loss, rather than image pixel loss, no paired training data is necessary.  ... 
arXiv:2106.00167v2 fatcat:v7mvvuwdtneuvg34wj3mryrhzm

Advanced Single Image Resolution Upsurging Using A Generative Adversarial Network

Md. Moshiur Rahman, Samrat Kumar Dey, Kabid Hassan Shibly
2020 Signal & Image Processing An International Journal  
In this paper, we have proposed a technique of generating higher resolution images from lower resolution using Residual in Residual Dense Block network architecture with a deep network.  ...  In recent times, various research works have been performed to generate a higher resolution of an image from its lower resolution.  ...  different, BN layers limit the generation ability.  ... 
doi:10.5121/sipij.2020.11105 fatcat:cn2cifwd5bcsfezjeopmet47ru

Manifold Regularization for Locally Stable Deep Neural Networks [article]

Charles Jin, Martin Rinard
2020 arXiv   pre-print
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks.  ...  Our regularizers are based on a sparsification of the graph Laplacian which holds with high probability when the data is sparse in high dimensions, as is common in deep learning.  ...  Applying the regularization term in Equation 2 yields, in the limit, a function which is smooth on the data manifold.  ... 
arXiv:2003.04286v2 fatcat:7v5tuul45vgy7jiljtannwg6um
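The graph-Laplacian smoothness penalty behind this entry can be illustrated in a few lines. The helper name `laplacian_smoothness` and the dense Gaussian edge weights are assumptions for illustration; the paper itself works with a sparsified Laplacian, which this sketch does not reproduce.

```python
import numpy as np

def laplacian_smoothness(X, f, sigma=1.0):
    """Illustrative manifold-regularization penalty f^T L f, where L is
    the unnormalized graph Laplacian over the data points X with
    Gaussian edge weights. Small values mean the function values f
    vary slowly between nearby points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                   # Gaussian affinity matrix
    L = np.diag(W.sum(1)) - W                            # unnormalized graph Laplacian
    return float(f @ L @ f)
```

A function that is constant on the data incurs zero penalty, which is the limiting behavior the snippet describes.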

Adversarial Training is a Form of Data-dependent Operator Norm Regularization [article]

Kevin Roth, Yannic Kilcher, Thomas Hofmann
2020 arXiv   pre-print
We establish a theoretical link between adversarial training and operator norm regularization for deep neural networks.  ...  Specifically, we prove that ℓ_p-norm constrained projected gradient ascent based adversarial training with an ℓ_q-norm loss on the logits of clean and perturbed inputs is equivalent to data-dependent (  ...  Data-dependent Operator Norm Regularization More generally, we can directly regularize the data-dependent (p, q)-operator norm of the Jacobian.  ... 
arXiv:1906.01527v5 fatcat:mgyc74zv4bacjisy35yxkipmyu
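For the (2, 2) case, the data-dependent operator norm of a Jacobian is its spectral norm, which can be estimated by power iteration. A minimal sketch, assuming the Jacobian is available as a dense matrix (the function name `spectral_norm` and the fixed iteration count are illustrative choices, not the authors' implementation):

```python
import numpy as np

def spectral_norm(J, iters=50):
    """Estimate the (2,2)-operator (spectral) norm of a Jacobian J by
    power iteration on J^T J: repeatedly apply the map and renormalize,
    then read off the norm of J applied to the converged direction."""
    v = np.random.default_rng(0).standard_normal(J.shape[1])
    for _ in range(iters):
        v = J.T @ (J @ v)
        v /= np.linalg.norm(v)
    return float(np.linalg.norm(J @ v))
```

In a network, `J` would be the Jacobian of the logits at a training input, so the penalty is data-dependent, matching the quantity the entry describes.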

An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness [article]

Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li
2020 arXiv   pre-print
Applying the network interpretation technique SmoothGrad yields additional performance gains, especially in cross-norm attacks and under heavy perturbations.  ...  We demonstrate that training the networks to have interpretable gradients improves their robustness to adversarial perturbations.  ...  Minimizing degradation requires the norm of the Jacobian matrix to be small, whereas high generation performance requires the Jacobian to capture data regularities.  ... 
arXiv:1912.03430v6 fatcat:iidb5bqjgvf4vnszt3ruy25eem

An Empirical Study on the Relation Between Network Interpretability and Adversarial Robustness

Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li
2021 SN Computer Science  
Applying the network interpretation technique SmoothGrad [59] yields additional performance gains, especially in cross-norm attacks and under heavy perturbations.  ...  We demonstrate that training the networks to have interpretable gradients improves their robustness to adversarial perturbations.  ...  Data Availability Statement All data and materials we used for our experiments are freely available via PyTorch's torchvision package [50] .  ... 
doi:10.1007/s42979-020-00390-x fatcat:raovh7donna55gu7wlmygfc4qy

Enhancing MR Image Segmentation with Realistic Adversarial Data Augmentation [article]

Chen Chen, Chen Qin, Cheng Ouyang, Zeju Li, Shuo Wang, Huaqi Qiu, Liang Chen, Giacomo Tarroni, Wenjia Bai, Daniel Rueckert
2022 arXiv   pre-print
The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks.  ...  To address this challenge, we propose AdvChain, a generic adversarial data augmentation framework, aiming at improving both the diversity and effectiveness of training data for medical image segmentation  ...  Augmenting images with these adversarial transformations contributes to stronger consistency regularization, enforcing the network to be invariant under photometric transformations and equivariant under  ... 
arXiv:2108.03429v2 fatcat:m24wykdkbna3fdtq2t5qdlgq2i

Review of Artificial Intelligence Adversarial Attack and Defense Technologies

Shilin Qiu, Qihe Liu, Shijie Zhou, Chunjiang Wu
2019 Applied Sciences  
Finally, we describe the existing adversarial defense methods respectively in three main categories, i.e., modifying data, modifying models and using auxiliary tools.  ...  However, artificial intelligence systems are vulnerable to adversarial attacks, which limit the applications of artificial intelligence (AI) technologies in key security fields.  ...  [8] used a regularization method to limit the vulnerability of data when training an SVM model.  ... 
doi:10.3390/app9050909 fatcat:u4if4uweqzc6tfdrc3kokckkua

Adversarial Perturbations Fool Deepfake Detectors [article]

Apurva Gandhi, Shomik Jain
2020 arXiv   pre-print
The DIP defense removes perturbations using generative convolutional neural networks in an unsupervised manner.  ...  This work uses adversarial perturbations to enhance deepfake images and fool common deepfake detectors.  ...  CODE AVAILABILITY Code and additional architecture details are available at: deepfakes.  ... 
arXiv:2003.10596v2 fatcat:h24oggx2tzd7tnkycyxr23xcaa

Infer-AVAE: An Attribute Inference Model Based on Adversarial Variational Autoencoder [article]

Yadong Zhou, Zhihao Ding, Xiaoming Liu, Chao Shen, Lingling Tong, Xiaohong Guan
2021 arXiv   pre-print
not limited by observations.  ...  data.  ...  National Natural Science Foundation (U1736205, 61833015, U1766215, U1936110, 61902308) , the Fundamental Research Funds for the Central Universities (xzy012019036) , Foundation of Xi'an Jiaotong University under  ... 
arXiv:2012.15005v2 fatcat:rfawvt3vsvbj5lrrnx2wtmidaa

Defending Against Adversarial Attack in ECG Classification with Adversarial Distillation Training [article]

Jiahao Shao, Shijia Geng, Zhaoji Fu, Weilun Xu, Tong Liu, Shenda Hong
2022 arXiv   pre-print
, Jacobian regularization, and noise-to-signal ratio regularization.  ...  Deep neural networks (DNNs) can be used to analyze these signals because of their high accuracy rate.  ...  Generally, PGD creates adversarial samples by using multiple iterations and limits the difference between new adversarial samples and those created in the last iteration.  ... 
arXiv:2203.09487v1 fatcat:aqduqnmembg7feirynvkzhgjs4
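The PGD procedure mentioned in this entry can be sketched as follows. This is a generic ℓ_∞ PGD sketch, not the paper's ECG-specific code; `grad_fn`, `eps`, and `alpha` are assumed names for the loss-gradient oracle, ball radius, and step size.

```python
import numpy as np

def pgd_attack(grad_fn, x, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient descent attack (sketch): repeatedly step in
    the sign of the loss gradient, then project back into the l_inf
    ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the eps-ball
    return x_adv
```

The clip keeps each iterate within `eps` of the original input, which is the "limits the difference" behavior the snippet refers to.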

Multi-Class Triplet Loss With Gaussian Noise for Adversarial Robustness

Benjamin Appiah, Edward Y. Baagyere, Kwabena Owusu-Agyemang, Zhiguang Qin, Muhammed Amin Abdullah
2020 IEEE Access  
on both clean and adversarial data.  ...  The ALP method matches the logits from a clean image and its corresponding adversarial image and provides an extra regularization term for better representation of the data.  ... 
doi:10.1109/access.2020.3024244 fatcat:fu2d7zmpmng53ojr65ximucqv4
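The ALP term described above amounts to a penalty on the distance between clean and adversarial logits. A minimal sketch, using a mean-squared distance (the specific distance and the name `alp_loss` are assumptions; it would be added to the usual classification loss):

```python
import numpy as np

def alp_loss(logits_clean, logits_adv):
    """Adversarial logit pairing (ALP) regularizer sketch: mean squared
    distance between the logits of a clean input and those of its
    adversarial counterpart. Zero when the two logit vectors match."""
    return float(np.mean((logits_clean - logits_adv) ** 2))
```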

Bridging the Gap Between Adversarial Robustness and Optimization Bias [article]

Fartash Faghri, Sven Gowal, Cristina Vasconcelos, David J. Fleet, Fabian Pedregosa, Nicolas Le Roux
2021 arXiv   pre-print
We demonstrate that the choice of optimizer, neural network architecture, and regularizer significantly affect the adversarial robustness of linear neural networks, providing guarantees without the need for adversarial training.  ...  For linearly separable data, under conditions of Theorem 4.1, the sequence of solutions to regularized classification problems converges in direction to a maximally robust classifier.  ... 
arXiv:2102.08868v2 fatcat:qhty2vogyvdmzaza45yr2xpq4y
Showing results 1 — 15 out of 43,458 results