15,576 Hits in 5.2 sec

Convergence and Margin of Adversarial Training on Separable Data [article]

Zachary Charles, Shashank Rajput, Stephen Wright, Dimitris Papailiopoulos
2019 arXiv   pre-print
This work analyzes the performance of adversarial training on linearly separable data, and provides bounds on the number of iterations required to achieve a large margin.  ...  To encourage robustness, it iteratively computes adversarial examples for the model, and then re-trains on these examples via some update rule.  ...  Conclusion: In this paper, we analyzed adversarial training on separable data.  ...
arXiv:1905.09209v1 fatcat:3ng44eqpozhkvan7saspl5n6di
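
To make the adversarial-training loop described in this snippet concrete, here is a minimal numpy sketch assuming a linear model, logistic loss, and an ℓ_∞-bounded adversary of radius eps (all illustrative choices, not the paper's exact update rule):

    import numpy as np

    def adversarial_training_linear(X, y, eps=0.1, lr=0.1, steps=1000):
        """Iteratively perturb each example against the current model, then
        take a gradient step of the logistic loss on the perturbed batch.
        Labels y are in {-1, +1}."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            # For a linear model the worst-case l_inf perturbation has a closed form:
            # delta_i = -eps * y_i * sign(w)
            X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
            margins = y * (X_adv @ w)
            s = 1.0 / (1.0 + np.exp(margins))     # -d(loss)/d(margin)
            w += lr * (y * s) @ X_adv / len(y)    # gradient descent step on the adversarial batch
        return w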

Robustifying Binary Classification to Adversarial Perturbation [article]

Fariborz Salehi, Babak Hassibi
2020 arXiv   pre-print
takes into account the power of the adversary in manipulating the data.  ...  Under some mild assumptions on the loss function, we theoretically show that the gradient descent iterates (with sufficiently small step size) converge to the RM classifier in its direction.  ...  We consider the case where the training data is perturbed by an adversary and introduce the "Robust Max-margin" (RM) classifier as a generalization of max-margin to perturbed input data.  ... 
arXiv:2010.15391v1 fatcat:yqt2ldeibvc57holegltigca6e
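
One way to write down a robust max-margin classifier of this kind is to maximize the worst-case margin over a perturbation ball, which for a linear classifier reduces via the dual norm (shown here for an illustrative ℓ_∞ adversary of radius ε; the paper's RM classifier may use a different threat model or normalization):

    \max_{\|w\|_2 \le 1}\; \min_i\; \min_{\|\delta_i\|_\infty \le \epsilon} y_i\, w^\top (x_i + \delta_i)
    \;=\; \max_{\|w\|_2 \le 1}\; \min_i\; \big( y_i\, w^\top x_i - \epsilon \|w\|_1 \big)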

Bridging the Gap Between Adversarial Robustness and Optimization Bias [article]

Fartash Faghri, Sven Gowal, Cristina Vasconcelos, David J. Fleet, Fabian Pedregosa, Nicolas Le Roux
2021 arXiv   pre-print
for adversarial training.  ...  We evaluate Fourier-ℓ_∞ robustness of adversarially-trained deep CIFAR-10 models from the standard RobustBench benchmark and visualize adversarial perturbations.  ...  Acknowledgements: The authors would like to thank Nicholas Carlini, Nicolas Papernot, and Courtney Paquette for helpful discussions and invaluable feedback.  ...
arXiv:2102.08868v2 fatcat:qhty2vogyvdmzaza45yr2xpq4y
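
A rough way to probe the kind of Fourier-ℓ_∞ sensitivity described here is to perturb an image along a single Fourier basis direction with bounded amplitude and check whether the prediction flips; a simplified numpy sketch (the `predict` interface, the HWC image layout, and the single-frequency probe are assumptions, and the paper's threat model is more general):

    import numpy as np

    def fourier_basis_image(h, w, u, v):
        """Real 2D Fourier basis pattern at frequency (u, v), scaled to unit l_inf norm."""
        ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        basis = np.cos(2 * np.pi * (u * ys / h + v * xs / w))
        return basis / np.abs(basis).max()

    def flips_under_fourier_noise(predict, x, label, eps, u, v):
        """True if adding +/- eps times one Fourier basis changes the predicted label.
        `predict` maps an image in [0, 1] of shape (H, W[, C]) to a class label."""
        b = fourier_basis_image(x.shape[0], x.shape[1], u, v)
        if x.ndim == 3:
            b = b[..., None]                      # broadcast over channels
        return any(predict(np.clip(x + s * eps * b, 0.0, 1.0)) != label
                   for s in (1.0, -1.0))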

Robust Large-Margin Learning in Hyperbolic Space [article]

Melanie Weber, Manzil Zaheer, Ankit Singh Rawat, Aditya Menon, Sanjiv Kumar
2020 arXiv   pre-print
We then provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples.  ...  Specifically, we consider the problem of learning a large-margin classifier for data possessing a hierarchical structure.  ...  To this end, we present a hyperbolic version of the classic perceptron algorithm and establish that it will converge on data that is separable with a margin.  ... 
arXiv:2004.05465v2 fatcat:pgv53zy4hng5jpk52rthz27lmq
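
A heavily simplified sketch of a perceptron-style update in the hyperboloid model, classifying by the sign of the Minkowski inner product (the sign conventions, the additive update, and the absence of any projection step are simplifications for illustration, not the paper's algorithm):

    import numpy as np

    def minkowski(a, b):
        """Minkowski inner product <a, b> = -a_0 * b_0 + sum_{i>0} a_i * b_i."""
        return -a[0] * b[0] + a[1:] @ b[1:]

    def hyperboloid_perceptron(X, y, epochs=100):
        """Mistake-driven updates for points X on the hyperboloid (<x, x> = -1, x_0 > 0)
        with labels y in {-1, +1}."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            mistakes = 0
            for x, label in zip(X, y):
                if label * minkowski(w, x) <= 0:   # misclassified (or on the boundary)
                    w = w + label * x              # additive update, as in the Euclidean case
                    mistakes += 1
            if mistakes == 0:
                break
        return w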

Understanding the Decision Boundary of Deep Neural Networks: An Empirical Study [article]

David Mickisch, Felix Assion, Florens Greßner, Wiebke Günther, Mariele Motta
2020 arXiv   pre-print
Therefore, we study the minimum distance of data points to the decision boundary and how this margin evolves over the training of a deep neural network.  ...  On the other hand, adversarial training appears to have the potential to prevent this undesired convergence of the decision boundary.  ...  Figure 8(a): The train and test errors also converge quickly to near optimal performance for adversarial training on MNIST.  ...
arXiv:2002.01810v1 fatcat:j7s6zxnb6rdppaflgvfx2t62w4
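
The margin studied here (minimum distance of a point to the decision boundary) is often estimated by bisecting the perturbation budget of an attack; a generic sketch in which `model(x)` returns a predicted label and `attack(model, x, label, eps)` is an assumed helper that searches for an adversarial example within radius eps:

    def margin_estimate(model, attack, x, label, eps_hi=1.0, iters=20):
        """Bisection on the perturbation radius: the smallest radius at which the
        attack succeeds upper-bounds the distance to the decision boundary."""
        lo, hi = 0.0, eps_hi
        if model(attack(model, x, label, hi)) == label:
            return hi                     # boundary lies beyond the search range
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if model(attack(model, x, label, mid)) != label:
                hi = mid                  # attack succeeded: boundary is within mid
            else:
                lo = mid                  # attack failed: boundary is beyond mid
        return hi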

Hold me tight! Influence of discriminative features on deep network boundaries [article]

Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
2020 arXiv   pre-print
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.  ...  In this work, we borrow tools from the field of adversarial robustness, and propose a new perspective that relates dataset features to the distance of samples to the decision boundary.  ...  Acknowledgments: We thank Maksym Andriushchenko and Evangelos Alexiou for their fruitful discussions and feedback.  ...
arXiv:2002.06349v4 fatcat:ddin2djtv5b27hd34dviiilx3e

Adversarial Reprogramming Revisited [article]

Matthias Englert, Ranko Lazic
2022 arXiv   pre-print
We also substantially strengthen a recent result of Phuong and Lampert on directional convergence of gradient flow, and obtain as a corollary that training two-layer ReLU neural networks on orthogonally separable datasets can cause their adversarial reprogramming to fail.  ...  from small initialisations, and when trained with logistic loss on symmetric linearly separable data, two-layer networks with the leaky ReLU activation converge to a globally maximum-margin linear classifier  ...
arXiv:2206.03466v1 fatcat:trujcde45vd73lg5xcbqv67k34
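
For context on the corollary above, adversarial reprogramming learns a single additive input "program" that repurposes a frozen network for a new task; a minimal PyTorch-style sketch (the canvas size, 3-channel inputs, and the label-remapping scheme are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def reprogram(frozen_model, loader, in_shape=(3, 224, 224),
                  n_target_classes=10, steps=1000, lr=0.05):
        """Train only the program W; the network itself stays fixed."""
        for p in frozen_model.parameters():
            p.requires_grad_(False)
        W = torch.zeros(in_shape, requires_grad=True)
        opt = torch.optim.Adam([W], lr=lr)
        for (x, y), _ in zip(loader, range(steps)):       # at most `steps` minibatches
            canvas = torch.zeros((x.shape[0],) + in_shape)
            h, w = x.shape[-2:]
            top, left = (in_shape[1] - h) // 2, (in_shape[2] - w) // 2
            canvas[:, :, top:top + h, left:left + w] = x  # embed the target input (3 channels assumed)
            logits = frozen_model(canvas + torch.tanh(W))
            # Reuse the first n_target_classes source logits as the target classes.
            loss = F.cross_entropy(logits[:, :n_target_classes], y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return W.detach()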

Cross-Entropy Loss and Low-Rank Features Have Responsibility for Adversarial Examples [article]

Kamil Nar, Orhan Ocal, S. Shankar Sastry, Kannan Ramchandran
2019 arXiv   pre-print
State-of-the-art neural networks are vulnerable to adversarial examples; they can easily misclassify inputs that are imperceptibly different from their training and test data.  ...  In this work, we establish that the use of the cross-entropy loss function and the low-rank features of the training data are responsible for the existence of these inputs.  ...  adversarial example for most of the training and test data.  ...
arXiv:1901.08360v1 fatcat:ukuqs3iyczcqfajgfryxkaheiy
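
A small numpy probe of the claim: train a linear classifier with the cross-entropy (logistic) loss by plain gradient descent and measure how close each training point ends up to the decision boundary (a toy probe, not the paper's analysis):

    import numpy as np

    def train_logistic_gd(X, y, lr=0.1, steps=5000):
        """Gradient descent on the cross-entropy (logistic) loss, y in {-1, +1}."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(steps):
            s = 1.0 / (1.0 + np.exp(y * (X @ w + b)))   # -d(loss)/d(margin)
            w += lr * (y * s) @ X / len(y)
            b += lr * (y * s).mean()
        return w, b

    def boundary_distances(X, y, w, b):
        """Signed Euclidean distance of each point to the hyperplane w.x + b = 0;
        small values flag points with nearby adversarial examples."""
        return y * (X @ w + b) / np.linalg.norm(w)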

Improving Adversarial Robustness of CNNs via Maximum Margin

Jiaping Wu, Zhaoqiang Xia, Xiaoyi Feng
2022 Applied Sciences  
From the perspective of margin, the adversarial examples are the clean examples perturbed in the margin direction, and adversarial training (AT) is equivalent to a data augmentation method that moves the  ...  In addition, we select examples close to the decision boundary through the SVM auxiliary classifier and train only on these more important examples.  ...  Adversarial training (AT) [16] is one of the most successful empirical defense methods at present; it is a data augmentation technique for training models on both natural and adversarial examples.  ...
doi:10.3390/app12157927 fatcat:le46n7gu4rdg3mlvjt64ybbxdq
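
A sketch of the selection step described in this snippet, assuming an sklearn linear SVM fitted on some fixed features (e.g. penultimate-layer activations) and using the magnitude of its decision function as the closeness-to-boundary score (the feature extractor and keep fraction are placeholders):

    import numpy as np
    from sklearn.svm import LinearSVC

    def select_near_boundary(features, labels, keep_fraction=0.3):
        """Keep the examples whose |decision function| under an auxiliary SVM
        is smallest, i.e. those closest to its decision boundary."""
        svm = LinearSVC().fit(features, labels)
        score = np.abs(svm.decision_function(features))
        if score.ndim > 1:                       # multi-class: closest one-vs-rest boundary
            score = score.min(axis=1)
        k = max(1, int(keep_fraction * len(labels)))
        return np.argsort(score)[:k]             # indices of the k lowest-margin examples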

MAGAN: Margin Adaptation for Generative Adversarial Networks [article]

Ruohan Wang, Antoine Cully, Hyung Jin Chang, Yiannis Demiris
2017 arXiv   pre-print
Evaluated on the task of unsupervised image generation, the proposed training procedure is simple yet robust on a diverse set of data, and achieves qualitative and quantitative improvements compared to  ...  We propose the Margin Adaptation for Generative Adversarial Networks (MAGANs) algorithm, a novel training procedure for GANs to improve stability and performance by using an adaptive hinge loss function  ...  Finding an appropriate margin value is crucial for successful training and is dependent on both architecture choice and data complexity [8] .  ... 
arXiv:1704.03817v3 fatcat:j5g2kv4dwnahxdr5npmskvv2si
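
The core of the procedure is an energy-based hinge loss whose margin is adapted during training; a schematic PyTorch-style step in which the margin simply tracks the mean energy of real samples (the paper's actual adaptation rule is conditional and more careful than this):

    import torch

    def magan_style_step(D_energy, G, d_opt, g_opt, x_real, z, margin):
        """D_energy returns a per-sample energy, e.g. an autoencoder reconstruction error."""
        # Discriminator: lower real energy, push fake energy up to the margin.
        e_real = D_energy(x_real).mean()
        e_fake = D_energy(G(z).detach()).mean()
        d_loss = e_real + torch.clamp(margin - e_fake, min=0.0)
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()
        # Generator: lower the energy of generated samples.
        g_loss = D_energy(G(z)).mean()
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()
        # Margin adaptation (simplified): track the real-data energy level.
        return 0.9 * margin + 0.1 * e_real.detach().item()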

Bad Global Minima Exist and SGD Can Reach Them [article]

Shengchao Liu, Dimitris Papailiopoulos, Dimitris Achlioptas
2021 arXiv   pre-print
We find that if we do not regularize explicitly, then SGD can be easily made to converge to poorly-generalizing, high-complexity models: all it takes is to first train on a random labeling of the data,  ...  The consensus explanation that has emerged credits the randomized nature of SGD for the bias of the training process towards low-complexity models and, thus, for implicit regularization.  ...  Acknowledgements: Dimitris Papailiopoulos is supported by an NSF CAREER Award #1844951, two Sony Faculty Innovation Awards, an AFOSR & AFRL Center of Excellence Award FA9550-18-1-0166, and an NSF TRIPODS  ...
arXiv:1906.02613v2 fatcat:bh3pgrt3jvddxhknmreimoipl4
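
The recipe alluded to in the snippet, as a schematic: first fit a random labeling of the data (no augmentation, no explicit regularization), then continue SGD from that solution on the true labels; `train_epoch` is an assumed callback running one epoch of SGD:

    import numpy as np

    def bad_minimum_recipe(model, train_epoch, X, y, pre_epochs=50, post_epochs=50, seed=0):
        """Phase 1: memorise a random labeling. Phase 2: train on the true labels."""
        rng = np.random.default_rng(seed)
        y_random = rng.permutation(y)            # a random labeling of the data
        for _ in range(pre_epochs):
            train_epoch(model, X, y_random)
        for _ in range(post_epochs):
            train_epoch(model, X, y)
        return model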

Implicit Bias of Gradient Descent based Adversarial Training on Separable Data

Yan Li, Ethan X. Fang, Huan Xu, Tuo Zhao
2020 International Conference on Learning Representations  
converges in direction to the maximum ℓ_2-norm margin classifier at the rate of O(1/√T), significantly faster than the rate O(1/log T) of training with clean data.  ...  In this paper, we provide new theoretical insights of gradient descent based adversarial training by studying its computational properties, specifically on its implicit bias.  ...  margin hyperplane (i.e., standard SVM) of the training data.  ...
dblp:conf/iclr/LiFXZ20 fatcat:f4f4ktmthfhm7dp5enqpdirgoi
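
A toy numpy experiment in the spirit of this result: run gradient descent on the logistic loss of ℓ_2-adversarially perturbed examples (for a linear model the worst-case perturbation is -ε y w/||w||) and compare the direction of the iterate with a reference max-margin direction computed separately, e.g. with an SVM solver (the step size, radius, and comparison are illustrative):

    import numpy as np

    def adv_gd_direction(X, y, eps=0.1, lr=0.5, steps=10000):
        """Returns the (normalized) direction of the adversarial-training iterate."""
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            u = w / (np.linalg.norm(w) + 1e-12)
            X_adv = X - eps * y[:, None] * u[None, :]     # worst-case l2 perturbation
            s = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
            w += lr * (y * s) @ X_adv / len(y)
        return w / np.linalg.norm(w)

    # Alignment with a reference direction w_star (e.g. the max-margin classifier
    # from an SVM solver): np.dot(adv_gd_direction(X, y), w_star / np.linalg.norm(w_star))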

Interpolation can hurt robust generalization even when there is no noise [article]

Konstantin Donhauser, Alexandru Ţifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
2021 arXiv   pre-print
We prove this phenomenon for the robust risk of both linear regression and classification and hence provide the first theoretical result on robust overfitting.  ...  Numerous recent works show that overparameterization implicitly reduces variance for min-norm interpolators and max-margin classifiers.  ...  Furthermore, the results of [31] show that gradient descent on robustly separable data converges to the robust max-ℓ_2-margin estimator (8).  ...
arXiv:2108.02883v2 fatcat:xeamcarrwfed3pfh2enaauys4i
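
For reference, a robust max-margin estimator of the kind cited as equation (8) can be written with the usual dual-norm reduction for an ℓ_p adversary of radius ε (the paper's exact choice of p and normalization may differ):

    \hat{w} \;\in\; \arg\max_{\|w\|_2 = 1}\; \min_i\; \min_{\|\delta_i\|_p \le \epsilon} y_i\, w^\top (x_i + \delta_i)
    \;=\; \arg\max_{\|w\|_2 = 1}\; \min_i\; \big( y_i\, w^\top x_i - \epsilon \|w\|_q \big),
    \qquad \tfrac{1}{p} + \tfrac{1}{q} = 1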

Learning Independent Features with Adversarial Nets for Non-linear ICA [article]

Philemon Brakel, Yoshua Bengio
2017 arXiv   pre-print
We also propose two methods for obtaining samples from the product of the marginals using either a simple resampling trick or a separate parametric distribution.  ...  These objectives compare samples from the joint distribution and the product of the marginals without the need to compute any probability densities.  ...  ACKNOWLEDGMENTS: The authors thank the CHISTERA project M2CR (PCIN-2015-226), Samsung Institute of Advanced Technology and CIFAR for their financial support.  ...
arXiv:1710.05050v1 fatcat:fpiu5sxmcradzl2ctemurcxt3y
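
The "simple resampling trick" mentioned here is commonly implemented by independently shuffling each coordinate of a minibatch, which turns joint samples into approximate samples from the product of the marginals (a generic sketch; the paper may use a variant):

    import numpy as np

    def product_of_marginals(batch, rng=None):
        """Independently permute each column of an (n, d) batch so that rows become
        approximate samples from the product of the marginals."""
        rng = np.random.default_rng() if rng is None else rng
        shuffled = batch.copy()
        for j in range(batch.shape[1]):
            shuffled[:, j] = rng.permutation(batch[:, j])
        return shuffled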

Training Efficiency and Robustness in Deep Learning [article]

Fartash Faghri
2021 arXiv   pre-print
In the context of learning visual-semantic embeddings, we find that prioritizing learning on more informative training data increases convergence speed and improves generalization performance on test data  ...  Finally, we study adversarial robustness in deep learning and approaches to achieve maximal adversarial robustness without training with additional data.  ...  Moreover, we hypothesize that prioritizing learning on more informative training data increases convergence speed and improves generalization performance on test data.  ... 
arXiv:2112.01423v1 fatcat:3yqco7htnjdbng4hx2ilkrnkaq
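
"Prioritizing learning on more informative training data" is often realized as loss-proportional sampling; a generic sketch of drawing a minibatch with probability proportional to each example's current loss (not necessarily the thesis's exact scheme):

    import numpy as np

    def prioritized_batch(losses, batch_size, temperature=1.0, rng=None):
        """Sample indices with probability proportional to loss ** (1 / temperature),
        so higher-loss (more informative) examples are visited more often."""
        rng = np.random.default_rng() if rng is None else rng
        p = np.asarray(losses, dtype=float) ** (1.0 / temperature)
        p = p / p.sum()
        return rng.choice(len(losses), size=batch_size, replace=False, p=p)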
Showing results 1 — 15 out of 15,576 results