1,046 Hits in 3.9 sec

Adversarial Weight Perturbation Helps Robust Generalization [article]

Dongxian Wu, Shu-tao Xia, Yisen Wang
2020 arXiv   pre-print
In this paper, we investigate the weight loss landscape from a new perspective, and identify a clear correlation between the flatness of weight loss landscape and robust generalization gap.  ...  Among them, adversarial training is the most promising one, which flattens the input loss landscape (loss change with respect to input) via training on adversarially perturbed examples.  ...  Amongst them, the geometry of optimization minimum (i.e., sharpness/flatness or loss landscape) is the most intuitive.  ... 
arXiv:2004.05884v2 fatcat:6izloyy2sbcjbiyxk43aqrzoi4
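The flatness idea in this abstract — perturbing the weights adversarially so that optimization favors flat regions of the weight loss landscape — can be sketched on a toy one-dimensional problem. Everything below (the quadratic loss, step sizes, and helper names) is an illustrative stand-in, not code from the paper:

```python
import numpy as np

# Toy sketch of adversarial weight perturbation on L(w) = (w - 3)^2.
# gamma (perturbation size) and lr (learning rate) are illustrative.

def grad(w):
    return 2.0 * (w - 3.0)   # gradient of (w - 3)^2

def awp_step(w, gamma=0.1, lr=0.1):
    # Inner step: perturb the weight in the ascent direction,
    # probing the nearby worst-case weight.
    v = gamma * np.sign(grad(w))
    # Outer step: descend using the gradient at the perturbed weight,
    # which biases optimization toward flat minima.
    return w - lr * grad(w + v)

w = 0.0
for _ in range(200):
    w = awp_step(w)
```

On this quadratic the perturbed-gradient iteration settles into a small neighborhood of the minimum at w = 3; on a real network the same inner-maximization / outer-minimization structure is applied to the full weight vector on each mini-batch.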

Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [article]

Guillermo Ortiz-Jimenez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
2021 arXiv   pre-print
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks, and eventually explore how the geometric study of adversarial examples can serve as a powerful  ...  The goal of this article is to provide readers with a set of new perspectives to understand deep learning, and to supply them with intuitive tools and insights on how to use adversarial robustness to improve  ...  To capture this geometry, it is useful to think in terms of the loss landscape in the input space induced by the classifier f_θ.  ... 
arXiv:2010.09624v2 fatcat:mvhosdtxgzcytel75h4foaxqqu

Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses [article]

Fu Lin, Rohit Mittapalli, Prithvijit Chattopadhyay, Daniel Bolya, Judy Hoffman
2020 arXiv   pre-print
We further explore directly regularizing towards a flat landscape for adversarial robustness.  ...  In this work, we investigate the potential effect defense techniques have on the geometry of the likelihood landscape - likelihood of the input images under the trained model.  ...  Interpreting robustness via geometry of loss landscapes.  ... 
arXiv:2008.11300v1 fatcat:3tyiqsishva5np4sdoteo6gbie

Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples [article]

Marc Khoury
2020 arXiv   pre-print
Then we fully characterize the geometry of the loss landscape of L_2-adversarial training in least-squares linear regression.  ...  The geometry of the loss landscape is subtle and has important consequences for optimization algorithms.  ...  • We fully characterize the geometry of the loss landscape of L_2-adversarial training in least-squares linear regression.  ... 
arXiv:1911.03784v2 fatcat:ksr6efecgzc2thaav4kfrcs6iu

CAP: Co-Adversarial Perturbation on Weights and Features for Improving Generalization of Graph Neural Networks [article]

Haotian Xue, Kaixiong Zhou, Tianlong Chen, Kai Guo, Xia Hu, Yi Chang, Xin Wang
2021 arXiv   pre-print
the weight and feature loss landscapes alternately.  ...  Adversarial training, which augments data with the worst-case adversarial examples, has been widely demonstrated to improve model's robustness against adversarial attacks and generalization ability.  ...  LOSS LANDSCAPE ANALYSIS In this section, we start by introducing the notations and GNNs, and then analyze the weight and feature loss landscapes, which lead to deep understanding of the generalization  ... 
arXiv:2110.14855v1 fatcat:uz7bga5t5vhabfzcsw7ah24x4y

When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations [article]

Xiangning Chen, Cho-Jui Hsieh, Boqing Gong
2022 arXiv   pre-print
Hence, this paper investigates ViTs and MLP-Mixers from the lens of loss geometry, intending to improve the models' data efficiency at training and generalization at inference.  ...  By promoting smoothness with a recently proposed sharpness-aware optimizer, we substantially improve the accuracy and robustness of ViTs and MLP-Mixers on various tasks spanning supervised, adversarial  ...  The instructions for plotting the landscape and the attention map are detailed in  ... 
arXiv:2106.01548v3 fatcat:jhfb2evd2vca3gyen2hqyxbllu
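The "recently proposed sharpness-aware optimizer" referenced in this abstract (SAM) first ascends to an approximate worst-case point within a small ball around the current weights, then descends using the gradient taken there. A minimal sketch on a toy quadratic, with illustrative hyperparameters:

```python
import numpy as np

# Sharpness-aware minimization (SAM) step on a toy 2-D quadratic loss
# 0.5 * w^T A w; rho and lr are illustrative, not the paper's values.

A = np.diag([1.0, 10.0])

def grad(w):
    return A @ w

def sam_step(w, rho=0.05, lr=0.05):
    g = grad(w)
    # Ascent to the approximate worst point in an L2 ball of radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descend using the gradient evaluated at the perturbed point.
    return w - lr * grad(w + eps)

w = np.array([2.0, 2.0])
for _ in range(100):
    w = sam_step(w)
```

The iterate contracts to a small neighborhood of the minimum at the origin; the radius of that neighborhood scales with rho, which is the price paid for the flatness bias.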

Hessian-based Analysis of Large Batch Training and Robustness to Adversaries [article]

Zhewei Yao, Amir Gholami, Qi Lei, Kurt Keutzer, Michael W. Mahoney
2018 arXiv   pre-print
Here, we study large batch size training through the lens of the Hessian operator and robust optimization.  ...  Furthermore, we show that robust training allows one to favor flat areas, as points with large Hessian spectrum show poor robustness to adversarial perturbation.  ...  The main question here is how the landscape of the loss functional changes after these robust optimizations are performed.  ... 
arXiv:1802.08241v4 fatcat:3nm2h5bv3vcolfg5uygcpjca6m
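The Hessian-based notion of sharpness used in this entry can be illustrated numerically: estimate the Hessian at a minimum by finite differences and compare its largest eigenvalue for a flat versus a sharp loss. The two toy losses below are stand-ins, not models from the paper:

```python
import numpy as np

def hessian(loss, w, h=1e-4):
    # Central finite-difference estimate of the Hessian of `loss` at w.
    n = w.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (loss(w + ei + ej) - loss(w + ei - ej)
                       - loss(w - ei + ej) + loss(w - ei - ej)) / (4 * h * h)
    return H

flat_loss = lambda w: 0.5 * w @ w     # exact Hessian: identity
sharp_loss = lambda w: 50.0 * w @ w   # exact Hessian: 100 * identity

w_star = np.zeros(2)                  # shared minimizer of both losses
lam_flat = np.linalg.eigvalsh(hessian(flat_loss, w_star)).max()
lam_sharp = np.linalg.eigvalsh(hessian(sharp_loss, w_star)).max()
```

The dominant eigenvalue is roughly 1 for the flat minimum and roughly 100 for the sharp one; in the paper's terms, the sharp minimum has a large Hessian spectrum and is expected to be less robust to adversarial perturbation.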

Geometric algorithms for predicting resilience and recovering damage in neural networks [article]

Guruprasad Raghavan, Jiayi Li, Matt Thomson
2020 arXiv   pre-print
In this paper, we establish a mathematical framework to analyze the resilience of artificial neural networks through the lens of differential geometry.  ...  To survive damage, biological network architectures have both intrinsic resilience to component loss and also activate recovery programs that adjust network weights through plasticity to stabilize performance  ...  (E) A depiction of multiple recovery paths on the loss landscape from trained network (N1) to networks on the damage hyper-plane (N2, N3, N4, N5).  ... 
arXiv:2005.11603v2 fatcat:dltogxhhczeslcqiccrw2nmjxi

Understanding and Increasing Efficiency of Frank-Wolfe Adversarial Training [article]

Theodoros Tsiligkaridis, Jay Roberts
2022 arXiv   pre-print
We develop a theoretical framework for adversarial training with FW optimization (FW-AT) that reveals a geometric connection between the loss landscape and the ℓ_2 distortion of ℓ_∞ FW attacks.  ...  Adversarial Training (AT) is a technique that approximately solves a robust optimization problem to minimize the worst-case loss and is widely regarded as the most effective defense.  ...  A Appendix A.1 Loss Landscape It has been shown experimentally that AT robust models and geometric regularization methods that increase adversarial robustness, have more regular loss landscapes than  ... 
arXiv:2012.12368v5 fatcat:jkn2uctgr5falnsg62qylfoqsq
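A Frank-Wolfe attack step of the kind FW-AT builds on replaces the projection used by PGD with a linear maximization oracle, which over an ℓ_∞ ball simply returns a sign vector. A toy sketch, maximizing an illustrative quadratic stand-in for the model loss (objective, radius, and iteration count are all assumptions):

```python
import numpy as np

def loss_grad(x):
    return x  # gradient of 0.5 * ||x||^2, a stand-in for the model loss

x0 = 0.1 * np.ones(3)   # "clean input"
eps = 0.5               # l_inf attack radius
x = x0.copy()
for t in range(20):
    g = loss_grad(x)
    # Linear maximization oracle over the l_inf ball around x0:
    # the maximizer of <g, s> is a corner of the box.
    s = x0 + eps * np.sign(g)
    gamma = 2.0 / (t + 2.0)   # standard Frank-Wolfe step size
    x = (1 - gamma) * x + gamma * s
```

Because each iterate is a convex combination of points inside the ball, the attack stays feasible without any projection; here it converges to the corner x0 + eps in every coordinate.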

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness [article]

Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, Xue Lin
2020 arXiv   pre-print
In this work, we propose to employ mode connectivity in loss landscapes to study the adversarial robustness of deep neural networks, and provide novel methods for improving this robustness.  ...  We also use mode connectivity to investigate the loss landscapes of regular and robust models against evasion attacks.  ...  CONCLUSION This paper provides novel insights on adversarial robustness of deep neural networks through the lens of mode connectivity in loss landscapes.  ... 
arXiv:2005.00060v2 fatcat:dk63su5vsjgbtbymuofhjiwy2a
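Mode connectivity asks whether two minima ("modes") of the loss are joined by a low-loss path. The simplest probe — evaluating the loss along the straight line between the two modes — can be sketched on a toy double-well loss (illustrative, not the networks studied in the paper):

```python
import numpy as np

def loss(w):
    # Double-well loss with minima at w = (-1, 0) and w = (1, 0).
    return (w @ w - 1.0) ** 2

w_a = np.array([-1.0, 0.0])   # first mode
w_b = np.array([1.0, 0.0])    # second mode

# Loss along the linear interpolation between the two modes.
ts = np.linspace(0.0, 1.0, 101)
path_losses = [loss((1 - t) * w_a + t * w_b) for t in ts]
barrier = max(path_losses) - max(loss(w_a), loss(w_b))
```

The straight path crosses a loss barrier (height 1 here, at the midpoint); mode-connectivity methods instead parameterize a curved path and optimize it to stay in the low-loss region, and the paper uses properties of such paths as a robustness probe.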

The Dilemma Between Data Transformations and Adversarial Robustness for Time Series Application Systems [article]

Sheila Alemany, Niki Pissinou
2021 arXiv   pre-print
For example, we have seen this through dimensionality reduction techniques used to aid with the generalization of features in machine learning applications.  ...  Adversarial examples, or nearly indistinguishable inputs created by an attacker, significantly reduce machine learning accuracy.  ...  The information loss resulted in the higher relative end of codimension and the most efficient creation of adversarial examples, with a 60.98% decrease in robustness at ε = 1.0.  ... 
arXiv:2006.10885v2 fatcat:i5zxkx3fcbeqnel42pfwjlr6aa

Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation [article]

Dohun Lim, Hyeonseok Lee, Sungchan Kim
2021 arXiv   pre-print
Our method is built on top of the assumption of smooth landscape in a loss function of the model prediction: locally consistent loss and gradient profile.  ...  Extensive experiments support the analysis results, revealing that the proposed saliency maps retrieve the original classes of adversarial examples crafted against both naturally and adversarially trained  ...  Adversarial robustness through local linearization.  ... 
arXiv:2103.14332v2 fatcat:od2a3lc7rzdnlpxjantnhzaspq

High Dimensional Spaces, Deep Learning and Adversarial Examples [article]

Simant Dube
2018 arXiv   pre-print
Second, we look at optimization landscapes of deep neural networks and examine the number of saddle points relative to that of local minima.  ...  improvements can be made and how adversarial examples can be eliminated.  ...  Unrecognizable Adversarial Examples. Besides power spectrum properties we can apply the Manifold Learning Hypothesis to understand geometry of image manifolds.  ... 
arXiv:1801.00634v5 fatcat:asephmeud5grfjyl5ql53g6zma

Towards Adversarial Robustness via Feature Matching

Zhuorong Li, Chao Feng, Jianwei Zheng, Minghui Wu, Hongchuan Yu
2020 IEEE Access  
Adversarial training is one of the most effective defenses for improving the robustness of classifiers. We introduce an enhanced adversarial training approach in this work.  ...  Further evaluations on CIFAR-100 also show our potential for a desirable boost in adversarial robustness for deep neural networks.  ...  ALP further enforces a loss term for better understanding of data.  ... 
doi:10.1109/access.2020.2993304 fatcat:g4eofxhbovddpjaf3dtnneh3pi

Why is unsupervised alignment of English embeddings from different algorithms so hard? [article]

Mareike Hartmann and Yova Kementchedjhieva and Anders Søgaard
2018 arXiv   pre-print
We believe understanding why is key to understanding both modern word embedding algorithms and the limitations and instability dynamics of GANs.  ...  This paper presents a challenge to the community: Generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional  ...  Understanding when biases induce highly non-convex landscapes, and how to make adversarial training less sensitive to such scenarios, remains an open problem, which we think will be key to the success  ... 
arXiv:1809.00150v1 fatcat:vnp53rclozba3b7kvcgtnzkhj4
Showing results 1 — 15 out of 1,046 results