7,423 Hits in 4.9 sec

Exploring Memorization in Adversarial Training [article]

Yinpeng Dong, Ke Xu, Xiao Yang, Tianyu Pang, Zhijie Deng, Hang Su, Jun Zhu
2022 arXiv   pre-print
In this paper, we explore the memorization effect in adversarial training (AT) for promoting a deeper understanding of model capacity, convergence, generalization, and especially robust overfitting of  ...  the adversarially trained models.  ...  In this paper, we explore the memorization behavior for a different learning algorithm-adversarial training (AT).  ... 
arXiv:2106.01606v2 fatcat:gb22ve35m5fhhpggc4lmp2tqx4
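For context on the technique itself: adversarial training (AT) solves a min-max problem, crafting worst-case perturbations in an inner loop and minimizing the loss on them in the outer loop. A minimal PGD-based sketch in PyTorch; the model, data, and hyperparameters are placeholders rather than this paper's exact setup:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: projected gradient descent in an L-inf ball."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        F.cross_entropy(model(x + delta), y).backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x  # stay in image range
        delta.grad.zero_()
    return (x + delta).detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: train on the adversarial examples."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()  # clear gradients accumulated by the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```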

A Closer Look at Memorization in Deep Networks [article]

Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, Simon Lacoste-Julien
2017 arXiv   pre-print
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness.  ...  because training data itself plays an important role in determining the degree of memorization.  ...  They explore several promising applications for this technique, including generation of adversarial training examples.  ... 
arXiv:1706.05394v2 fatcat:ltnngvtq2renfcdk76yradnfha
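The memorization probes used in this line of work typically contrast fitting real labels against fitting partially randomized ones: a network that still reaches near-perfect training accuracy on random labels is memorizing them. A small sketch of that label-noise setup (the function name and `fraction` parameter are illustrative, not from the paper):

```python
import torch

def randomize_labels(targets, num_classes, fraction=1.0):
    """Replace a fraction of labels with uniformly random ones. Training
    accuracy on these corrupted examples measures pure memorization,
    since their labels carry no signal."""
    targets = torch.as_tensor(targets).clone()
    idx = torch.randperm(len(targets))[: int(fraction * len(targets))]
    targets[idx] = torch.randint(0, num_classes, (len(idx),))
    return targets
```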

A Research Agenda: Dynamic Models to Defend Against Correlated Attacks [article]

Ian Goodfellow
2019 arXiv   pre-print
In this article I describe a research agenda for securing machine learning models against adversarial inputs at test time.  ...  So far most research in this direction has focused on an adversary who violates the identical assumption, and imposes some kind of restricted worst-case distribution shift.  ...  This is just one example of an abstention criterion different from the memorization criterion. Other researchers are also exploring abstention mechanisms in other contexts (Carlini & Wagner, 2017; ?  ... 
arXiv:1903.06293v1 fatcat:sdk6ted4yfd5fgqz3g4bojkuq4
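For illustration only, the simplest abstention mechanism is confidence thresholding: refuse to classify when the top softmax probability is low. This is a hypothetical baseline, not the memorization-based criterion the article advocates:

```python
import torch

@torch.no_grad()
def predict_with_abstention(model, x, threshold=0.9):
    """Return class predictions, with -1 wherever the model abstains."""
    probs = torch.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = -1  # abstain on low-confidence inputs
    return pred
```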

Survey: Leakage and Privacy at Inference Time [article]

Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
2021 arXiv   pre-print
We first discuss what leakage is in the context of different data, tasks, and model architectures.  ...  Although in theory most privacy attacks can be used in some way during the training of M_θ as potential adversaries to defend against, in practice this setting has been mostly explored for MIAs, with adversarial  ...  Measuring memorization: In order to detect and prevent (or exploit) the memorization effect in trained models, one would need to first estimate it using one of the existing mechanisms.  ... 
arXiv:2107.01614v1 fatcat:76a724yzkjfvjisrokssl6assa
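One of the simplest existing mechanisms for estimating memorization is a loss-threshold membership test: training examples tend to incur lower loss than unseen ones. A hedged sketch (model and data are placeholders; the survey covers considerably more refined attacks and metrics):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_example_losses(model, x, y):
    """Per-example cross-entropy; unusually low loss suggests the example
    was memorized during training."""
    return F.cross_entropy(model(x), y, reduction="none")

# Compare the loss distributions on known members vs. known non-members
# to calibrate a threshold; inputs scoring below it are flagged as members.
```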

Active Data Pattern Extraction Attacks on Generative Language Models [article]

Bargav Jayaraman, Esha Ghosh, Huseyin Inan, Melissa Chase, Sambuddha Roy, Wei Dai
2022 arXiv   pre-print
In this work, we set out to investigate potential information leakage vulnerabilities in a typical Smart Reply pipeline and show that it is possible for an adversary, having black-box or gray-box access  ...  to a Smart Reply model, to extract sensitive user information present in the training data.  ... 
arXiv:2207.10802v1 fatcat:eq4wfmrpbjeshnjxqedpzujpom

Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Networks [article]

Zihang Zou, Boqing Gong, Liqiang Wang
2022 arXiv   pre-print
We study protecting a user's data (images in this work) against a learner's unauthorized use in training neural networks.  ...  To the best of our knowledge, this work is the first to protect an individual user's data ownership from unauthorized use in training neural networks.  ...  The σ² is set to 0.1 in our experiments.  ...  Adversarial Training [26].  ... 
arXiv:2109.09023v2 fatcat:iypftfkrabghhgdi6dptly5wpq

What do we Really Know about State of the Art NER? [article]

Sowmya Vajjala, Ramya Balasubramaniam
2022 arXiv   pre-print
Additionally, we generate six new adversarial test sets through small perturbations in the original test set, replacing select entities while retaining the context.  ...  We also train and test our models on randomly generated train/dev/test splits, followed by an experiment where the models are trained on a select set of genres but tested on genres not seen in training.  ...  Clearly, the models seem to be memorizing some of the entities that appear in both the training and test sets.  ... 
arXiv:2205.00034v2 fatcat:c6lvccnkobgubesa32vdlnriee
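The perturbation recipe sketched in the abstract (swap entity mentions, keep the surrounding context) is straightforward to script. A sketch assuming BIO-tagged data and a hypothetical `replacements` lexicon mapping entity types to candidate surface forms:

```python
import random

def perturb_entities(tokens, labels, replacements):
    """Rebuild the sentence with each entity span replaced by a randomly
    chosen same-type substitute, leaving non-entity context untouched."""
    out, i = [], 0
    while i < len(tokens):
        if labels[i].startswith("B-"):
            etype = labels[i][2:]
            j = i + 1
            while j < len(labels) and labels[j] == f"I-{etype}":
                j += 1
            out.extend(random.choice(replacements[etype]).split())
            i = j
        else:
            out.append(tokens[i])
            i += 1
    return out
```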

Detecting Overfitting of Deep Generative Networks via Latent Recovery

Ryan Webster, Julien Rabin, Loic Simon, Frederic Jurie
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Using this methodology, we show that pure GAN models appear to generalize well, in contrast with those using hybrid adversarial losses, which are amongst the most widely applied generative methods.  ...  State of the art deep generative networks have achieved such realism that they can be suspected of memorizing training images.  ...  In the seminal works of [29, 36], a similar inversion of deep nets unveiled adversarial examples.  ... 
doi:10.1109/cvpr.2019.01153 dblp:conf/cvpr/WebsterRSJ19 fatcat:susfzplfgzf2dfahrv6iflv2we
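Latent recovery probes overfitting by optimizing a latent code to reconstruct a target image: if training images are recovered markedly better than held-out images, the generator has memorized them. A minimal sketch, assuming a generator that maps an (N, z_dim) latent to an image batch:

```python
import torch

def latent_recovery_error(generator, target, z_dim=128, steps=500, lr=0.05):
    """Minimize ||G(z) - target||^2 over z and report the final error."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((generator(z) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()

# Evaluate on training vs. held-out images: a large gap in recovery
# error between the two sets is the overfitting signal.
```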

High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks

Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
robustness and accuracy, and some evidence in understanding training heuristics.  ...  Thus the observation leads to multiple hypotheses that are related to the generalization behaviors of CNNs, including a potential explanation for adversarial examples, a discussion of CNN's trade-off between  ...  learn the generalizable patterns out of the data, in contrast to directly memorizing everything to reduce the training loss?"  ... 
doi:10.1109/cvpr42600.2020.00871 dblp:conf/cvpr/WangWHX20 fatcat:35l67fcx3nhnpcdvls4jzlosme
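The decomposition behind this analysis splits each image into low- and high-frequency components with a mask in Fourier space; the paper's observation is that CNNs can pick up on the high-frequency part that humans largely ignore. A sketch of that split for a grayscale image (the cutoff radius is an arbitrary choice here):

```python
import numpy as np

def frequency_split(image, radius=12):
    """Separate a 2D grayscale image into low- and high-frequency parts
    using a hard circular mask around the spectrum's center."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real
    return low, high
```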

High Frequency Component Helps Explain the Generalization of Convolutional Neural Networks [article]

Haohan Wang, Xindi Wu, Zeyi Huang, Eric P. Xing
2020 arXiv   pre-print
robustness and accuracy, and some evidence in understanding training heuristics.  ...  Thus the observation leads to multiple hypotheses that are related to the generalization behaviors of CNNs, including a potential explanation for adversarial examples, a discussion of CNN's trade-off between  ... 
arXiv:1905.13545v3 fatcat:wif6wwytyzfgxbwzbg3bi7vjsi

Generating Memorable Images Based on Human Visual Memory Schemas [article]

Cameron Kyle-Davidson, Adrian G. Bors, Karla K. Evans
2020 arXiv   pre-print
This research study proposes using Generative Adversarial Networks (GAN) that incorporate a two-dimensional measure of human memorability to generate memorable or non-memorable images of scenes.  ...  We assess the difference in memorability between images generated to be memorable or non-memorable through an independent computational measure of memorability, and additionally assess the effect of memorability  ...  These results explore the generated image memorability space, showing a smooth exploration of the manifold.  ... 
arXiv:2005.02969v1 fatcat:3zn2yeawqvd4zastc4mqmlibli

Excess Capacity and Backdoor Poisoning [article]

Naren Sarayu Manoj, Avrim Blum
2021 arXiv   pre-print
From a computational standpoint, we show that under certain assumptions, adversarial training can detect the presence of backdoors in a training set.  ...  A backdoor data poisoning attack is an adversarial attack wherein the attacker injects several watermarked, mislabeled training examples into a training set.  ... 
arXiv:2109.00685v3 fatcat:52uis4hgmjfrvig32msgzltekq
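The attack model analyzed here is easy to simulate: stamp a small trigger patch onto a fraction of training images and relabel them with an attacker-chosen class, so the trained network associates the trigger with that class. An illustrative sketch (patch size, poison rate, and target class are arbitrary):

```python
import torch

def poison(images, labels, target_class=0, rate=0.05, patch_value=1.0):
    """Backdoor a copy of an NCHW image batch: add a 3x3 corner trigger to
    a random `rate` fraction and flip those labels to `target_class`."""
    images, labels = images.clone(), labels.clone()
    idx = torch.randperm(images.size(0))[: int(rate * images.size(0))]
    images[idx, :, -3:, -3:] = patch_value  # bottom-right trigger patch
    labels[idx] = target_class
    return images, labels
```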

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks [article]

Alec Radford, Luke Metz, Soumith Chintala
2016 arXiv   pre-print
Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and  ...  In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.  ... 
arXiv:1511.06434v2 fatcat:7usp77o5xffzxlz3pir2w75oyy
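The DCGAN guidelines (all-convolutional generator, batch normalization, ReLU hidden activations, Tanh output, no fully connected hidden layers) translate directly into code. A scaled-down 32×32 sketch rather than the paper's 64×64 architecture:

```python
import torch.nn as nn

class DCGANGenerator(nn.Module):
    """Transposed-convolution generator following the DCGAN recipe."""
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),              # -> 4x4
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),              # -> 8x8
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),              # -> 16x16
            nn.ConvTranspose2d(ch * 2, 3, 4, 2, 1, bias=False),
            nn.Tanh(),                                          # -> 32x32 RGB
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))
```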

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models [article]

Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim
2021 arXiv   pre-print
Neural language models are known to have a high capacity for memorization of training samples.  ...  Differential privacy (DP), a popular choice to train models with privacy guarantees, comes with significant costs in terms of utility degradation and disparate impact on subgroups of users.  ...  We, on the other hand, use adversarial training and a triplet-based regularization to train private language models that do not memorize sensitive user information, which has not been explored before.  ... 
arXiv:2103.07567v2 fatcat:kma3aa7wejdobom2ef5yhkn4he
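The triplet-based regularization mentioned in the snippet can be illustrated with a standard triplet margin loss added to the language-modeling objective; how the paper selects anchors, positives, and negatives, and how it weights the term, are assumptions here:

```python
import torch.nn as nn

# Generic triplet margin loss over sentence representations; only meant to
# show the shape of the joint privacy-utility objective.
triplet = nn.TripletMarginLoss(margin=1.0)

def joint_loss(lm_loss, anchor, positive, negative, lam=0.1):
    """Language-modeling loss plus a triplet regularizer weighted by lam."""
    return lm_loss + lam * triplet(anchor, positive, negative)
```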

mixup: Beyond Empirical Risk Minimization [article]

Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz
2018 arXiv   pre-print
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.  ...  Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples.  ...  Furthermore, mixup helps to combat memorization of corrupt labels, sensitivity to adversarial examples, and instability in adversarial training.  ... 
arXiv:1710.09412v2 fatcat:zpwavulzc5gbbea4g2xlgbutnu
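mixup itself is a two-line recipe: train on convex combinations x̃ = λx_i + (1−λ)x_j and ỹ = λy_i + (1−λ)y_j with λ ~ Beta(α, α). A sketch faithful to that recipe; pairing each example with a shuffled copy of its own batch is a common implementation choice:

```python
import numpy as np
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Return convex combinations of a batch with a shuffled copy of itself.
    Labels must be one-hot (or soft) so they can be mixed as well."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```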
Showing results 1 — 15 out of 7,423 results