
Smoothed Analysis with Adaptive Adversaries [article]

Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
2021 arXiv   pre-print
We prove novel algorithmic guarantees for several online problems in the smoothed analysis model.  ...  -Dispersion in online optimization: We consider online optimization of piecewise Lipschitz functions where functions with ℓ discontinuities are chosen by a smoothed adaptive adversary and show that the  ...  We introduce a general technique for reducing smoothed analysis with adaptive adversaries to the much simpler setting of oblivious adversaries.  ... 
arXiv:2102.08446v2 fatcat:eq3326qer5aohmxnqmirqc3wcq
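
Across several of these entries (this one, and the online-learning and oracle-efficient entries below), the smoothed adversary is constrained to distributions whose density is at most 1/σ times a base density. Restated in display form as a sketch (notation ours; the base measure is typically uniform):

\[
\mathcal{D}\ \text{is } \sigma\text{-smooth} \quad\Longleftrightarrow\quad \frac{d\mathcal{D}}{d\mu}(x) \;\le\; \frac{1}{\sigma}\ \text{ for all } x,
\]

where μ is the base measure on the instance space; an adaptive smoothed adversary may choose each round's σ-smooth distribution after observing the history so far.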

Models of Smoothing in Dynamic Networks

Uri Meir, Ami Paz, Gregory Schwartzman, Hagit Attiya
2020 International Symposium on Distributed Computing  
Finally, we study the power of an adaptive adversary, who can choose its actions in accordance with the changes that have occurred so far.  ...  [Distributed Computing, 2018] suggested to use smoothed analysis in order to study dynamic networks.  ...  As smoothed analysis aims to be the middle ground between worst-case and average-case analysis, it is very natural to consider the effect of noise on the strongest possible adversary, i.e., an adaptive  ... 
doi:10.4230/lipics.disc.2020.36 dblp:conf/wdag/MeirPS20 fatcat:s3x3wkno3bgfxbhrib2u3z5a7e

Models of Smoothing in Dynamic Networks [article]

Uri Meir, Ami Paz, Gregory Schwartzman
2020 arXiv   pre-print
Finally, we study the power of an adaptive adversary, who can choose its actions in accordance with the changes that have occurred so far.  ...  ~[Distributed Computing, 2018] suggested to use smoothed analysis in order to study dynamic networks.  ...  Adapting the ideas of smoothed analysis in this setting is not an easy task.  ... 
arXiv:2009.13006v1 fatcat:76fpix7txfbqbhf3do5gskjkw4

Improving Calibration through the Relationship with Adversarial Robustness [article]

Yao Qin, Xuezhi Wang, Alex Beutel, Ed H. Chi
2021 arXiv   pre-print
To this end, we propose Adversarial Robustness based Adaptive Label Smoothing (AR-AdaLS) that integrates the correlations of adversarial robustness and calibration into training by adaptively softening  ...  Neural networks lack adversarial robustness, i.e., they are vulnerable to adversarial examples that cause incorrect predictions through small perturbations to the inputs.  ...  Our work makes label smoothing adaptive and incorporates the correlation with adversarial robustness to further improve calibration.  ... 
arXiv:2006.16375v2 fatcat:6iu6ye6znvdwdcm3vibey6u5qi
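
As a rough illustration of the "adaptively softening labels" idea in this entry, here is a minimal sketch in which a per-sample robustness score in [0, 1] sets the smoothing strength; the function name, the scoring, and the direction of the adaptation are assumptions for illustration, not the AR-AdaLS implementation.

    import torch.nn.functional as F

    def adaptive_label_smoothing_loss(logits, targets, robustness, base_eps=0.1):
        # Per-sample label smoothing: less robust samples get softer targets.
        # (Hypothetical sketch; not the AR-AdaLS reference code.)
        num_classes = logits.size(-1)
        eps = base_eps * (1.0 - robustness)                        # shape (batch,)
        one_hot = F.one_hot(targets, num_classes).float()
        soft = (1.0 - eps).unsqueeze(1) * one_hot + (eps / num_classes).unsqueeze(1)
        return -(soft * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()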

An Adaptive View of Adversarial Robustness from Test-time Smoothing Defense [article]

Chao Tang, Yifei Fan, Anthony Yezzi
2019 arXiv   pre-print
In this paper, we present an adaptive view of the issue by evaluating various test-time smoothing defenses against white-box untargeted adversarial examples.  ...  Through controlled experiments with pretrained ResNet-152 on ImageNet, we first illustrate the non-monotonic relation between adversarial attacks and smoothing defenses.  ...  We hope this paper will stimulate more detailed analysis of adversarial examples at a finer scale.  ... 
arXiv:1911.11881v1 fatcat:vsba6ajc6zdjxj47neknbnmqvu
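
A minimal sketch of a test-time smoothing defense of the kind evaluated in this entry, using one possible filter (Gaussian blur); the paper compares several smoothing operations, and the kernel settings here are placeholders.

    import torch
    from torchvision.transforms import GaussianBlur

    def smoothed_predict(model, x, kernel_size=5, sigma=1.0):
        # Smooth the input batch before classification (illustrative defense only).
        blur = GaussianBlur(kernel_size, sigma=sigma)
        with torch.no_grad():
            return model(blur(x)).argmax(dim=-1)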

CAT: Customized Adversarial Training for Improved Robustness [article]

Minhao Cheng, Qi Lei, Pin-Yu Chen, Inderjit Dhillon, Cho-Jui Hsieh
2020 arXiv   pre-print
In this paper, we propose a new algorithm, named Customized Adversarial Training (CAT), which adaptively customizes the perturbation level and the corresponding label for each training sample in adversarial  ...  We show that the proposed algorithm achieves better clean and robust accuracy than previous adversarial training methods through extensive experiments.  ...  In Figure 4b, Adp train denotes the original adversarial training, Adv+LS denotes adversarial training with label smoothing (setting y by Eq (4)), Adp-Adv denotes adversarial training with adaptive instance-wise  ... 
arXiv:2002.06789v1 fatcat:ob3tobxm6fcj3bj6jpslihbprq
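
A hedged sketch of the "customized perturbation level and label" idea: each sample keeps its own budget, the budget grows while the model still resists attack at that budget, and the label is softened in proportion to the budget. The one-step attack, hyper-parameters, and update rule below are illustrative placeholders, not the CAT algorithm as published.

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        # One-step attack with a per-sample budget eps; assumes (B, C, H, W) inputs.
        x = x.clone().detach().requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
        return (x + eps.view(-1, 1, 1, 1) * grad.sign()).detach()

    def cat_like_step(model, x, y, eps, num_classes, eps_max=8/255, step=2/255, c=0.5):
        x_adv = fgsm(model, x, y, eps)
        resisted = model(x_adv).argmax(1).eq(y)                    # attack failed -> raise budget
        eps = torch.where(resisted, (eps + step).clamp(max=eps_max), eps)
        smooth = c * eps / eps_max                                 # softer labels for larger budgets
        one_hot = F.one_hot(y, num_classes).float()
        soft_y = (1 - smooth).unsqueeze(1) * one_hot + (smooth / num_classes).unsqueeze(1)
        loss = -(soft_y * F.log_softmax(model(x_adv), dim=1)).sum(1).mean()
        return loss, eps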

A Closer Look at Smoothness in Domain Adversarial Training [article]

Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, R. Venkatesh Babu
2022 arXiv   pre-print
We find that converging to smooth minima with respect to (w.r.t.) the task loss stabilizes adversarial training, leading to better performance on the target domain.  ...  In contrast to the task loss, our analysis shows that converging to smooth minima w.r.t. the adversarial loss leads to sub-optimal generalization on the target domain.  ...  Analysis of Smoothness In this section, we analyze the curvature properties of the task loss with respect to the parameters (θ).  ... 
arXiv:2206.08213v1 fatcat:ev7fubbo6nepzo4lar47pjdzcq
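
The excerpt's claim is that smoothing the loss landscape w.r.t. the task loss (but not the adversarial loss) helps; a minimal sketch of one way to seek such smooth minima is a sharpness-aware ascent step computed on the task loss only. This is an assumed illustration of the idea, not the paper's training procedure; task_loss_fn and adv_loss_fn are closures that run a forward pass and return the respective losses.

    import torch

    def sam_on_task_loss_step(model, task_loss_fn, adv_loss_fn, optimizer, rho=0.05):
        # 1) Ascend the weights along the task-loss gradient (sharpness-aware step).
        task_loss_fn(model).backward()
        grads = [p for p in model.parameters() if p.grad is not None]
        norm = torch.norm(torch.stack([p.grad.norm() for p in grads])) + 1e-12
        perturb = {p: rho * p.grad / norm for p in grads}
        with torch.no_grad():
            for p, e in perturb.items():
                p.add_(e)
        optimizer.zero_grad()
        # 2) Gradient of task + adversarial loss at the perturbed weights.
        (task_loss_fn(model) + adv_loss_fn(model)).backward()
        # 3) Restore the weights and update with the perturbed-point gradient.
        with torch.no_grad():
            for p, e in perturb.items():
                p.sub_(e)
        optimizer.step()
        optimizer.zero_grad()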

Smoothed Analysis of Online and Differentially Private Learning [article]

Nika Haghtalab, Tim Roughgarden, Abhishek Shetty
2020 arXiv   pre-print
We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries.  ...  In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature.  ...  Putting these all together we get the following regret bound against smoothed adaptive adversaries. Theorem 3.3 (Adaptive Adversary).  ... 
arXiv:2006.10129v1 fatcat:opnu7y2qznbynol7epph65tgp4

Study of Pre-processing Defenses against Adversarial Attacks on State-of-the-art Speaker Recognition Systems [article]

Sonal Joshi, Jesús Villalba, Piotr Żelasko, Laureano Moro-Velázquez, Najim Dehak
2021 arXiv   pre-print
Among the proposed pre-processing defenses, PWG combined with randomized smoothing offers the most protection against the attacks, with accuracy averaging 93 ... undefended system and an absolute improvement  ...  It evaluates them against powerful adaptive white-box adversarial attacks, i.e., when the adversary has full knowledge of the system, including the defense.  ...  This paper focuses on adversarial attacks against speaker recognition systems and proposes four pre-processing defenses with the following major contributions: • We show extensive analysis of adversarial  ... 
arXiv:2101.08909v2 fatcat:uj3jzji42bcdthjq23hdlc57a4
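
A minimal sketch of the randomized-smoothing component mentioned in this entry: predict by aggregating the model's outputs over Gaussian-noised copies of the input. The noise level, sample count, and soft-vote aggregation are illustrative choices, and the paper's PWG pre-processing stage is not shown.

    import torch

    def randomized_smoothing_predict(model, x, sigma=0.1, n_samples=100):
        # Average the class probabilities over noised copies, then take the arg-max.
        with torch.no_grad():
            total = 0.0
            for _ in range(n_samples):
                noisy = x + sigma * torch.randn_like(x)
                total = total + torch.softmax(model(noisy), dim=-1)
            return total.argmax(dim=-1)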

Policy Smoothing for Provably Robust Reinforcement Learning [article]

Aounon Kumar, Alexander Levine, Soheil Feizi
2022 arXiv   pre-print
Our main theoretical contribution is to prove an adaptive version of the Neyman-Pearson Lemma -- a key lemma for smoothing-based certificates -- where the adversarial perturbation at a particular time  ...  However, DNNs have been used extensively in real-world adaptive tasks such as reinforcement learning (RL), making such systems vulnerable to adversarial attacks as well.  ...  The above analysis can be adapted to obtain an upper bound on P[h(Y) = 1] of Φ(Φ⁻¹(p) + B/σ).  ... 
arXiv:2106.11420v3 fatcat:toalxmperncqbi4sswsrmkkpqu
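
Restating the certificate-style bound quoted in the excerpt in display form (notation as given there; our reading is that p is the corresponding probability under unperturbed noise, B an ℓ2 perturbation budget, σ the smoothing noise level, and Φ the standard normal CDF):

\[
\Pr[h(Y) = 1] \;\le\; \Phi\!\left(\Phi^{-1}(p) + \frac{B}{\sigma}\right).
\]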

ScaLA: Accelerating Adaptation of Pre-Trained Transformer-Based Language Models via Efficient Large-Batch Adversarial Noise [article]

Minjia Zhang, Niranjan Uma Naresh, Yuxiong He
2022 arXiv   pre-print
Finally, we also address the theoretical aspect of large-batch optimization with adversarial noise and provide a theoretical convergence rate analysis for ScaLA using techniques for analyzing non-convex  ...  Different from prior methods, we take a sequential game-theoretic approach by adding lightweight adversarial noise into large-batch optimization, which significantly improves adaptation speed while preserving  ...  This section first provides an analysis of the computational cost and then describes two approaches to reduce the time spent in generating adversarial noise, thereby further reducing the overall adaptation  ... 
arXiv:2201.12469v1 fatcat:i4t36adgdjf7lk7w4zpsyr2qky
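
As a rough sketch of "adding lightweight adversarial noise into large-batch optimization", the following perturbs an embedding tensor in the direction that increases the loss and trains on the perturbed loss; the single-step form, bounds, and names are assumptions for illustration rather than the ScaLA algorithm.

    import torch

    def perturbed_loss(loss_fn, embeddings, eps=1e-3, alpha=1e-3, steps=1):
        # Build a small adversarial perturbation of the embeddings, then
        # return the loss at the perturbed point (illustrative sketch).
        delta = torch.zeros_like(embeddings, requires_grad=True)
        for _ in range(steps):
            grad = torch.autograd.grad(loss_fn(embeddings + delta), delta)[0]
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
        return loss_fn(embeddings + delta)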

Towards Bridging the gap between Empirical and Certified Robustness against Adversarial Examples [article]

Jay Nandy and Sudipan Saha and Wynne Hsu and Mong Li Lee and Xiao Xiang Zhu
2022 arXiv   pre-print
Our proposed Certification through Adaptation with Auto-Noise technique achieves average certified radius (ACR) scores of up to 1.102 and 1.148 for the CIFAR-10 and ImageNet datasets respectively, using AT  ...  In this paper, we propose a novel method, called Certification through Adaptation, that transforms an AT model into a randomized smoothing classifier during inference to provide certified robustness for  ...  using an adaptive BN technique with an appropriate/pre-selected level of Gaussian noise σ to obtain the base classifier f_adapt for certification using randomized smoothing.  ... 
arXiv:2102.05096v3 fatcat:fof5jn57djfvtb2j32tgsdtsrm
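
A minimal sketch of the "adaptive BN with a pre-selected Gaussian noise level" step described in the excerpt: re-estimate the BatchNorm running statistics on noised inputs before using the model as a randomized-smoothing base classifier. The function name and loop details are illustrative assumptions, not the authors' implementation.

    import torch

    def adapt_bn_to_noise(model, data_loader, sigma, n_batches=50):
        # In train mode, BatchNorm layers update their running mean/var from the
        # noised forward passes; the model weights themselves are not updated.
        model.train()
        with torch.no_grad():
            for i, (x, _) in enumerate(data_loader):
                if i >= n_batches:
                    break
                model(x + sigma * torch.randn_like(x))
        model.eval()
        return model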

Oracle-Efficient Online Learning for Beyond Worst-Case Adversaries [article]

Nika Haghtalab, Yanjun Han, Abhishek Shetty, Kunhe Yang
2022 arXiv   pre-print
For the smoothed analysis setting, our results give the first oracle-efficient algorithm for online learning with smoothed adversaries [HRS21].  ...  First, the smoothed analysis setting of [RST11, HRS21] where an adversary is constrained to generating samples from distributions whose density is upper bounded by 1/σ times the uniform density.  ...  Smoothed Online Learning We work with the smoothed adaptive online adversarial setting from [HRS21].  ... 
arXiv:2202.08549v2 fatcat:vsywl6xlo5ephcv5ulz4a2dtr4

Average-Case and Smoothed Competitive Analysis of the Multilevel Feedback Algorithm

Luca Becchetti, Stefano Leonardi, Alberto Marchetti-Spaccamela, Guido Schäfer, Tjark Vredeveld
2006 Mathematics of Operations Research  
In this paper, we introduce the notion of smoothed competitive analysis of online algorithms.  ...  Smoothed analysis has been proposed by Spielman and Teng [25] to explain the behavior of algorithms that work well in practice while performing very poorly from a worst-case analysis point of view.  ...  Our analysis holds both for the oblivious adversary and for the adaptive adversary.  ... 
doi:10.1287/moor.1050.0170 fatcat:4df3qel33bhy7hdcvxuwmtksyu
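
The notion introduced in this entry compares an online algorithm to the offline optimum on inputs that are adversarially chosen and then randomly perturbed; one common way to write such a ratio (a sketch of the general shape, not necessarily the paper's exact definition) is

\[
c_{\mathrm{smooth}} \;=\; \sup_{\bar I}\ \mathbb{E}_{I \sim N(\bar I)}\!\left[\frac{\mathrm{ALG}(I)}{\mathrm{OPT}(I)}\right],
\]

where \bar{I} is the adversarial instance and N(\bar{I}) the smoothing distribution over its perturbations; the excerpt notes that the analysis covers both oblivious and adaptive adversaries.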

Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness [article]

Yilun Jin, Lixin Fan, Kam Woh Ng, Ce Ju, Qiang Yang
2020 arXiv   pre-print
While adversarial training (AT) is regarded as the most robust defense, it suffers from poor performance both on clean examples and under other types of attacks, e.g. attacks with larger perturbations.  ...  Deep neural networks (DNNs) are known to be prone to adversarial attacks, for which many remedies are proposed.  ...  Adaptive Attacks. We perform adaptive attacks on PAT-EntM. Due to its connection with label smoothing, we leverage CE with smoothed labels in Eqn. 6 as the loss function L_atk in Eqn. 1.  ... 
arXiv:2011.13538v1 fatcat:qnch4hmdcbgldldc5oz2aajbgi
Showing results 1 — 15 out of 23,902 results