
Stochastic Activation Pruning for Robust Adversarial Defense [article]

Guneet S. Dhillon, Kamyar Azizzadenesheli, Zachary C. Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar
2018 arXiv   pre-print
In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense.  ...  In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy.  ...  STOCHASTIC ACTIVATION PRUNING Consider the defense problem from a game-theoretic perspective (Osborne & Rubinstein, 1994).  ...
arXiv:1803.01442v1 fatcat:ry5ezy25dzeedkbxf4tgzjvfty
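
A minimal PyTorch sketch of the SAP mechanism the snippet describes: activations are kept stochastically with probability proportional to their magnitude, and survivors are rescaled so the layer output stays unbiased in expectation. Function and parameter names, and the vector sizes, are illustrative assumptions rather than the authors' code.

```python
import torch

def stochastic_activation_prune(h: torch.Tensor, num_samples: int) -> torch.Tensor:
    """Keep activations drawn with probability proportional to |h_i|,
    rescaling the kept entries; the rest are pruned to zero."""
    probs = h.abs() / h.abs().sum()                 # sampling distribution p_i
    idx = torch.multinomial(probs, num_samples, replacement=True)
    keep = torch.zeros_like(h, dtype=torch.bool)
    keep[idx] = True
    # Rescale by the probability of surviving at least one draw, which
    # keeps the pruned layer unbiased in expectation.
    survive = (1.0 - (1.0 - probs).pow(num_samples)).clamp(min=1e-12)
    return torch.where(keep, h / survive, torch.zeros_like(h))

# Example: prune a random post-ReLU activation vector.
h = torch.relu(torch.randn(512))
h_sap = stochastic_activation_prune(h, num_samples=256)
```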

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses [article]

Xiao Wang, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, Peter Chin
2019 arXiv   pre-print
Among different defense proposals, stochastic network defenses such as random neuron activation pruning or random perturbation to layer inputs are shown to be promising for attack mitigation.  ...  This paper is motivated by pursuing a better trade-off between adversarial robustness and test accuracy for stochastic network defenses.  ...  Stochastic Activation Pruning (SAP). Stochastic activation pruning (SAP), proposed by Dhillon et al.  ... 
arXiv:1908.07116v1 fatcat:2hoia3hwznbjhntrfunh5ja6fy
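
The "random switching" family the snippet mentions can be illustrated with a switching block: each block holds several candidate channels and one is sampled on every forward pass. This is a minimal sketch of that general idea, not the HRS architecture itself; the class name and layer sizes are assumptions.

```python
import random
import torch
import torch.nn as nn

class SwitchingBlock(nn.Module):
    """Holds several candidate channels; one is sampled per forward pass."""
    def __init__(self, channels: int, in_dim: int, out_dim: int):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
            for _ in range(channels)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The attacker cannot know which path will be active at query time.
        return self.paths[random.randrange(len(self.paths))](x)

model = nn.Sequential(SwitchingBlock(4, 784, 256), nn.Linear(256, 10))
logits = model(torch.randn(1, 784))
```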

Defensive dropout for hardening deep neural networks under adversarial attacks

Siyue Wang, Xiao Wang, Pu Zhao, Wujie Wen, David Kaeli, Peter Chin, Xue Lin
2018 Proceedings of the International Conference on Computer-Aided Design - ICCAD '18  
Compared with stochastic activation pruning (SAP), another defense method that introduces randomness into the DNN model, we find that our defensive dropout achieves much larger variances of the gradients  ...  network model and the attacker's strategy for generating adversarial examples. We also investigate the mechanism behind the outstanding defense effects achieved by the proposed defensive dropout.  ...  We thank researchers at the US Naval Research Laboratory for their comments on previous drafts of this paper.  ... 
doi:10.1145/3240765.3264699 dblp:conf/iccad/WangWZWKCL18 fatcat:r3lo7gelcrflbm6qq36jhuzegy
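
A minimal sketch of the defensive dropout idea described above: dropout is kept active at test time, so each prediction follows a different random subnetwork. The rate and layer sizes are illustrative, not the paper's tuned settings.

```python
import torch
import torch.nn as nn

class DefensiveDropoutNet(nn.Module):
    def __init__(self, p: float = 0.3):
        super().__init__()
        self.p = p
        self.features = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
        self.head = nn.Linear(256, 10)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        # training=True keeps the mask stochastic even under model.eval(),
        # so inference itself is randomized.
        h = nn.functional.dropout(h, p=self.p, training=True)
        return self.head(h)

model = DefensiveDropoutNet().eval()
x = torch.randn(1, 784)
logits_a, logits_b = model(x), model(x)  # differ: each pass resamples the mask
```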

Block Switching: A Stochastic Approach for Deep Learning Security [article]

Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin, Peter Chin
2020 arXiv   pre-print
We show empirically that BS leads to a more dispersed input gradient distribution and superior defense effectiveness compared with other stochastic defenses such as stochastic activation pruning (SAP).  ...  In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity.  ...  As a comparison, another recent stochastic defense, stochastic activation pruning (SAP), only reduces the fooling ratio to 32.1% and 93.3% given the same attack. The fooling ratio can be further decreased with  ... 
arXiv:2002.07920v1 fatcat:qtvctvhoq5eatodyukefgt3yvu
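
The "dispersed input gradient distribution" claim suggests a simple diagnostic: run repeated stochastic forward passes and measure the variance of the input gradient across them. The sketch below is an illustrative probe under assumed shapes and loss, not the paper's evaluation code.

```python
import torch
import torch.nn as nn

def input_gradient_variance(model: nn.Module, x: torch.Tensor,
                            label: int, passes: int = 16) -> torch.Tensor:
    grads = []
    for _ in range(passes):
        xi = x.clone().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(xi), torch.tensor([label]))
        loss.backward()
        grads.append(xi.grad.detach())
    # Mean per-coordinate variance across passes: higher variance means a
    # single gradient query gives the attacker a noisier signal.
    return torch.stack(grads).var(dim=0).mean()

# Toy stochastic model: dropout left active so every pass is randomized.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                      nn.Dropout(0.5), nn.Linear(128, 10)).train()
print(input_gradient_variance(model, torch.randn(1, 784), label=3))
```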

Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning [article]

Guneet S. Dhillon, Nicholas Carlini
2020 arXiv   pre-print
Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a defense to adversarial examples that was attacked and found to be broken by the "Obfuscated Gradients" paper (Athalye et al., 2018).  ...  Introduction Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) is a proposed defense to adversarial examples.  ...  Background We assume familiarity with neural networks, methods to generate adversarial examples, Stochastic Activation Pruning, and the Backwards Pass Differentiable Approximation. Notation.  ... 
arXiv:2010.00071v1 fatcat:q6lw5mu2wfawbi2gpedyaf6owy
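
The Backwards Pass Differentiable Approximation (BPDA) named in the snippet can be sketched as a straight-through operator: the true, possibly non-differentiable defense runs on the forward pass, while the backward pass approximates it by the identity. This is the generic form only, not the exact attack code of either paper.

```python
import torch

class BPDAIdentity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, defense_fn):
        return defense_fn(x)        # true (possibly non-differentiable) forward

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None    # approximate d(defense)/dx by the identity

def bpda(x: torch.Tensor, defense_fn) -> torch.Tensor:
    return BPDAIdentity.apply(x, defense_fn)

# Example: attack-side gradients flow through a hard-threshold "defense".
x = torch.randn(8, requires_grad=True)
y = bpda(x, lambda t: (t > 0.5).float() * t)
y.sum().backward()                  # x.grad is all ones: identity backward
```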

Game Theory for Adversarial Attacks and Defenses [article]

Shorya Sharma
2022 arXiv   pre-print
We use two randomization methods, random initialization and stochastic activation pruning, to create diversity among networks.  ...  Hence, some adversarial defense techniques have been developed to improve the security and robustness of the models and prevent them from being attacked.  ...  Activation Pruning [16] (SAP)-like networks to increase diversity among networks by stochastically pruning activations and to minimize computation consumption by sharing the same weights and bias with  ... 
arXiv:2110.06166v3 fatcat:547yungdhvd3tpmxwbib47mnve
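
A minimal sketch of the defender's mixed strategy the snippet alludes to: each query is answered by a network drawn at random from a pool of diverse models. Uniform sampling and the toy pool are assumptions; the paper derives its strategy game-theoretically.

```python
import random
import torch
import torch.nn as nn

# A pool of diverse networks, e.g. from different random initializations.
pool = [nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
        for _ in range(5)]

def mixed_strategy_predict(x: torch.Tensor) -> torch.Tensor:
    # Sample one network per query, so the attacker faces a moving target.
    return random.choice(pool)(x)

logits = mixed_strategy_predict(torch.randn(1, 784))
```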

Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning [article]

Jonah O'Brien Weiss, Tiago Alves, Sandip Kundu
2022 arXiv   pre-print
In this work, we investigate the adversarial robustness of models produced by several irregular pruning schemes and by 8-bit quantization.  ...  However, since DNNs are vulnerable to adversarial inputs, it is important to consider the relationship between compression and adversarial robustness.  ...  Finally, there is one method proposed to prune one feature layer based on the difference between adversarial activations and clean activations, but without adversarial training [20].  ... 
arXiv:2206.07406v1 fatcat:rj4nfmxlajhurarq3n2dizc4e4
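
The two compression operations this entry studies, irregular (unstructured) magnitude pruning and 8-bit quantization, can be sketched as follows. The sparsity level, toy layer, and symmetric quantizer are illustrative assumptions; the paper's greedy adversarial criterion is not reproduced here.

```python
import torch
import torch.nn as nn

layer = nn.Linear(128, 64)

with torch.no_grad():
    w = layer.weight
    # Irregular pruning: zero the smallest-magnitude weights individually.
    threshold = w.abs().flatten().kthvalue(int(0.8 * w.numel())).values
    w.mul_((w.abs() > threshold).float())          # keep ~20% of the weights

    # 8-bit symmetric uniform quantization of the surviving weights.
    scale = w.abs().max() / 127.0
    w.copy_((w / scale).round().clamp(-127, 127) * scale)
```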

Model Compression with Adversarial Robustness: A Unified Optimization Framework [article]

Shupeng Gui, Zhangyang Wang (Texas A&M University)
2019 arXiv   pre-print
This paper studies model compression through a different lens: could we compress models without hurting their robustness to adversarial attacks, in addition to maintaining accuracy?  ...  Previous literature suggested that the goals of robustness and compactness might sometimes conflict. We propose a novel Adversarially Trained Model Compression (ATMC) framework.  ...  A few parallel efforts [38, 51] discussed activation pruning or quantization as defense mechanisms.  ... 
arXiv:1902.03538v3 fatcat:qi2xwc7cnvb4rc63rs35pfg6ja

Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling

Ozan Özdenizci, Robert Legenstein
2021 International Conference on Machine Learning  
Hence, if adversarial robustness is an issue, training of sparsely connected networks necessitates considering adversarially robust sparse learning.  ...  by pruning.  ...  This work has been supported by the "University SAL Labs" initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems.  ... 
dblp:conf/icml/OzdenizciL21 fatcat:y4zf3joapvelznhuftqy5euwl4

Adversarial Neural Pruning with Latent Vulnerability Suppression [article]

Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
2020 arXiv   pre-print
We validate our Adversarial Neural Pruning with Vulnerability Suppression (ANP-VS) method on multiple benchmark datasets, on which it not only obtains state-of-the-art adversarial robustness but also improves  ...  Explicitly, we define vulnerability for each latent feature and then propose a new loss for adversarial learning, Vulnerability Suppression (VS) loss, that aims to minimize the feature-level vulnerability  ...  Acknowledgements We thank the anonymous reviewers for their insightful comments and suggestions. We are also grateful to the authors of Lee et al. (2018)  ... 
arXiv:1908.04355v4 fatcat:piwkckxbi5fxrnk75vylcytt64
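
A minimal sketch of a feature-level vulnerability measure in the spirit of this entry: a latent feature's vulnerability is taken as the expected distortion of its activation under an adversarial input, and the suppression loss penalizes it. The exact definition and weighting in ANP-VS may differ.

```python
import torch

def vulnerability(feats_clean: torch.Tensor, feats_adv: torch.Tensor) -> torch.Tensor:
    # Per-feature expected absolute distortion over the batch.
    return (feats_adv - feats_clean).abs().mean(dim=0)

def vs_loss(feats_clean: torch.Tensor, feats_adv: torch.Tensor) -> torch.Tensor:
    # Suppress vulnerability by penalizing its average across features.
    return vulnerability(feats_clean, feats_adv).mean()

fc = torch.randn(32, 256)             # clean latent features (batch, dim)
fa = fc + 0.1 * torch.randn(32, 256)  # stand-in adversarial features
loss = vs_loss(fc, fa)
```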

Adversarial Robustness vs Model Compression, or Both? [article]

Shaokai Ye, Kaidi Xu, Sijia Liu, Jan-Henrik Lambrechts, Huan Zhang, Aojun Zhou, Kaisheng Ma, Yanzhi Wang, Xue Lin
2021 arXiv   pre-print
Furthermore, this work studies two hypotheses about weight pruning in the conventional setting and finds that weight pruning is essential for reducing the network model size in the adversarial setting,  ...  However, adversarial robustness requires a significantly larger capacity of the network than that for natural training with only benign examples.  ...  Acknowledgments This work is partly supported by the National Science Foundation CNS-1932351, Institute for Interdisciplinary Information Core Technology (IIISCT) and Zhongguancun Haihua Institute for  ... 
arXiv:1903.12561v5 fatcat:vujngm6rh5ge7h5ebqh6l2j42a

Adversarial Neuron Pruning Purifies Backdoored Deep Models [article]

Dongxian Wu, Yisen Wang
2021 arXiv   pre-print
Based on these observations, we propose a novel model repairing method, termed Adversarial Neuron Pruning (ANP), which prunes some sensitive neurons to purify the injected backdoor.  ...  As deep neural networks (DNNs) are growing larger, their requirements for computational resources become huge, which makes outsourcing training more popular.  ...  For ANP, we optimize all masks using Stochastic Gradient Descent (SGD) with the perturbation budget ε = 0.4 and the trade-off coefficient α = 0.2.  ... 
arXiv:2110.14430v1 fatcat:7x2ni2zqenfqdkq3rkx25kmtgq
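
The snippet's mention of optimizing masks with SGD can be sketched as follows: a continuous mask in [0, 1] gates each neuron and is trained by gradient descent, after which low-mask neurons are pruned. The objective here is a stand-in; ANP additionally perturbs weights adversarially within the budget ε, which this toy loop omits.

```python
import torch
import torch.nn as nn

layer, head = nn.Linear(784, 256), nn.Linear(256, 10)
mask = nn.Parameter(torch.ones(256))               # one gate per neuron
opt = torch.optim.SGD([mask], lr=0.2)

for _ in range(100):                               # toy optimization loop
    x = torch.randn(64, 784)                       # stand-in clean batch
    y = torch.randint(0, 10, (64,))
    h = torch.relu(layer(x)) * torch.clamp(mask, 0.0, 1.0)
    loss = nn.functional.cross_entropy(head(h), y)
    opt.zero_grad(); loss.backward(); opt.step()

to_prune = torch.clamp(mask.detach(), 0, 1) < 0.2  # drop low-mask neurons
```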

A Robust Deep-Neural-Network-Based Compressed Model for Mobile Device Assisted by Edge Server

Yushuang Yan, Qingqi Pei
2019 IEEE Access  
In model robustness, a defensive mechanism is proposed for enhancing the robustness of the compressed model against adversarial examples.  ...  Furthermore, the weight distribution of the compressed model is considered for improving the model's accuracy in the defense method.  ...  Specifically, a defensive mechanism is proposed for enhancing the robustness of the compressed model against adversarial examples in model robustness.  ... 
doi:10.1109/access.2019.2958406 fatcat:inihgl2in5habp7zvbsmnu3mfi

Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness [article]

Tianlong Chen, Huan Zhang, Zhenyu Zhang, Shiyu Chang, Sijia Liu, Pin-Yu Chen, Zhangyang Wang
2022 arXiv   pre-print
We then optimize the associated slopes and intercepts of the replaced linear activations to restore model performance while maintaining certifiability.  ...  robust training (i.e., over 30% improvements on CIFAR-10 models); and (3) scale up complete verification to large adversarially trained models with 17M parameters.  ...  The Superiority of Grafting for Verification In this section, we compare grafting with five pruning baseline methods: (i) Baseline without neuron pruning or grafting; (ii) Stochastic Activation Pruning  ... 
arXiv:2206.07839v1 fatcat:tnr6vnfburegtekbocfg4z22lu
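
A minimal sketch of the grafting operation the snippet describes: selected (hard-to-verify) ReLU units are replaced by a linear function with a learnable slope and intercept, removing their nonlinearity while the two parameters are tuned to restore accuracy. The module below is an illustrative stand-in, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraftedActivation(nn.Module):
    def __init__(self, dim: int, grafted: torch.Tensor):
        super().__init__()
        self.register_buffer("grafted", grafted)    # bool mask of grafted units
        self.slope = nn.Parameter(torch.ones(dim))
        self.intercept = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        linear = self.slope * x + self.intercept    # linear, verifier-friendly
        return torch.where(self.grafted, linear, torch.relu(x))

# Graft a random ~30% of units in a 256-wide layer (illustrative choice).
act = GraftedActivation(256, grafted=torch.rand(256) < 0.3)
out = act(torch.randn(8, 256))
```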

Improving Adversarial Robustness via Channel-wise Activation Suppressing [article]

Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang
2022 arXiv   pre-print
The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).  ...  We show that CAS can train a model that inherently suppresses adversarial activation, and can be easily applied to existing defense methods to further improve their robustness.  ...  Stochastic Activation Pruning (SAP) (Dhillon et al., 2018) takes both the randomness and the value of features into consideration.  ... 
arXiv:2103.08307v2 fatcat:lhfljy7pq5g7dm4nbtccbtzloi
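
A minimal sketch of a channel-wise suppression module in the spirit of CAS: an auxiliary classifier over globally pooled channels provides per-class channel weights, which then reweight (suppress) the channels of the intermediate feature map. Sizes and the use of the predicted class are assumptions about the mechanism.

```python
import torch
import torch.nn as nn

class ChannelSuppress(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.aux = nn.Linear(channels, num_classes)    # auxiliary classifier

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        pooled = feat.mean(dim=(2, 3))                 # (N, C) global pooling
        cls = self.aux(pooled).argmax(dim=1)           # per-sample class guess
        weights = torch.sigmoid(self.aux.weight[cls])  # (N, C) channel weights
        return feat * weights[:, :, None, None]        # suppress channels

cas = ChannelSuppress(channels=64, num_classes=10)
out = cas(torch.randn(4, 64, 8, 8))
```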
Showing results 1 — 15 out of 580 results