
Improved Gradient based Adversarial Attacks for Quantized Networks [article]

Kartik Gupta, Thalaiyasingam Ajanthan
2021 arXiv   pre-print
In this work, we systematically study the robustness of quantized networks against gradient-based adversarial attacks and demonstrate that these quantized models suffer from gradient vanishing issues and ... Despite being a simple modification to existing gradient-based adversarial attacks, experiments on multiple image classification datasets with multiple network architectures demonstrate that our temperature ... We evaluated our improved gradient-based adversarial attacks on CIFAR-10/100 datasets with VGG-16 and ResNet-18 networks quantized using multiple recent techniques [1, 2, 4, 16]. ...
arXiv:2003.13511v2 fatcat:7m5afvgqujbfnhiifzvx6vb57q
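
The fix this abstract hints at, temperature scaling of the logits to counter gradient vanishing in quantized models, is easy to illustrate. A minimal PyTorch sketch, assuming a classifier `model` and inputs in [0, 1]; the single FGSM step, the budget `eps`, and the temperature value are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def fgsm_with_temperature(model, x, y, eps=8 / 255, temperature=10.0):
    """One FGSM step on temperature-scaled logits: dividing by T > 1
    softens a saturated softmax so usable gradients flow again."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x) / temperature, y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```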

Improved Gradient based Adversarial Attacks for Quantized Networks [article]

Kartik Gupta, Thalaiyasingam Ajanthan
2020
In this work, we systematically study the robustness of quantized networks against gradient-based adversarial attacks and demonstrate that these quantized models suffer from gradient vanishing issues and ... Despite being a simple modification to existing gradient-based adversarial attacks, experiments on multiple image classification datasets with multiple network architectures demonstrate that our temperature ... We acknowledge DATA61, CSIRO for their support and thank Puneet Dokania for useful discussions. ...
doi:10.48550/arxiv.2003.13511 fatcat:nkc3ttc27zam3jty4ccjakqv2m

QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks [article]

Hassan Ali, Hammad Tariq, Muhammad Abdullah Hanif, Faiq Khalid, Semeen Rehman, Rehan Ahmed, Muhammad Shafique
2018 arXiv   pre-print
We apply our techniques to Convolutional Neural Networks (CNNs, a type of DNN heavily used in vision-based applications) against adversarial attacks from the open-source Cleverhans ... Our experimental results show a 1%-5% increase in adversarial accuracy for MNIST and a 0%-2.4% increase in adversarial accuracy for CIFAR-10. ... Another approach is to mask either the gradient or the whole DNN; e.g., Defensive Distillation masks the gradient of the network, but it is only valid for gradient-based attacks and it can be compromised ...
arXiv:1811.01437v1 fatcat:co24qxvzsjbovhwrgupyiv5zvi
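
The defense described here quantizes inputs so that small perturbations are absorbed by the rounding step. A rough sketch of that idea, with the level count as an illustrative assumption rather than the paper's tuned setting:

```python
import torch

def quantize_input(x: torch.Tensor, levels: int = 4) -> torch.Tensor:
    """Uniformly quantize an input in [0, 1] to `levels` discrete values;
    perturbations smaller than half a quantization step are squashed."""
    return torch.round(x * (levels - 1)) / (levels - 1)
```

A perturbation of magnitude below 1/(2·(levels−1)) leaves the quantized input unchanged, which is the intuition behind the reported adversarial-accuracy gains.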

Defensive Quantization: When Efficiency Meets Robustness [article]

Ji Lin, Chuang Gan, Song Han
2019 arXiv   pre-print
As a by-product, DQ can also improve the accuracy of quantized models even in the absence of adversarial attacks. ... However, we observe that the conventional quantization approaches are vulnerable to adversarial attacks. ... ACKNOWLEDGMENTS We thank MIT Quest for Intelligence, MIT-IBM Watson AI Lab, MIT-SenseTime Alliance, Xilinx, Samsung and AWS Machine Learning Research Awards for their support. ...
arXiv:1904.08444v1 fatcat:s7fkaj47gjg6fbd5ddm5nswvry
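
Defensive Quantization is generally understood to work by keeping the network's Lipschitz constant small so that quantization errors are not amplified layer by layer (a detail beyond the snippet above, so treat it as background). A rough sketch of a spectral-norm penalty of that general shape, restricted to 2-D weights for simplicity and with the coefficient as an assumption:

```python
import torch

def lipschitz_penalty(model: torch.nn.Module, beta: float = 1e-3):
    """Sum of spectral norms of linear-layer weights, added to the
    training loss to discourage error amplification across layers."""
    penalty = 0.0
    for p in model.parameters():
        if p.dim() == 2:  # linear layers only, for simplicity
            penalty = penalty + torch.linalg.matrix_norm(p, ord=2)
    return beta * penalty
```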

Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions [article]

Adnan Siraj Rakin, Jinfeng Yi, Boqing Gong, Deliang Fan
2019 arXiv   pre-print
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks. ... We also propose to train robust neural networks by using adaptive quantization techniques for the activation functions. ... PGD-based adversarial training is used as a baseline defense method, achieving about 94% accuracy under PGD attack and 97.1% accuracy under FGSM attack. ...
arXiv:1807.06714v2 fatcat:4kv3wl5rmvg3dccff6ly2pnvz4
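
The "fixed quantized activation" this abstract mentions can be sketched as a clipped ReLU followed by uniform rounding, kept trainable with a straight-through estimator. A minimal sketch; the bit-width, clip range, and STE choice are assumptions for illustration:

```python
import torch

class QuantReLU(torch.nn.Module):
    """Clipped ReLU whose output is rounded to 2**bits uniform levels."""

    def __init__(self, bits: int = 2, clip: float = 1.0):
        super().__init__()
        self.levels = 2 ** bits - 1
        self.clip = clip

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.clamp(0.0, self.clip) / self.clip          # clipped ReLU in [0, 1]
        q = torch.round(y * self.levels) / self.levels   # hard discretization
        # Straight-through estimator: forward pass uses q, backward uses y.
        return ((q - y).detach() + y) * self.clip
```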

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes

Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly when compared to a normally ... Present state-of-the-art defenses against adversarial attacks require the networks to be explicitly trained using adversarial samples that are computationally expensive to generate. ... We would like to extend our gratitude to all the reviewers for their valuable suggestions. ...
doi:10.1109/cvpr42600.2020.00110 dblp:conf/cvpr/AddepalliSBSB20 fatcat:soveoke7sngghhdovibqhtgbx4
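
The "bit planes" in the title refer to slicing an 8-bit image into its binary components; keeping only the most significant planes yields one of the "differently quantized images" the snippet mentions. A small NumPy sketch, with the plane count k as an illustrative choice:

```python
import numpy as np

def keep_top_bit_planes(img: np.ndarray, k: int = 4) -> np.ndarray:
    """Zero out the (8 - k) least significant bit planes of a uint8 image."""
    mask = (0xFF << (8 - k)) & 0xFF
    return img & mask
```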

Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only [article]

Tianhang Zheng, Changyou Chen, Kui Ren
2018 arXiv   pre-print
Recent work on adversarial attack and defense suggests that PGD is a universal l_∞ first-order attack, and PGD adversarial training can significantly improve network robustness against a wide range of ... For the l_1-norm loss, we propose a computationally feasible solution by embedding a differentiable soft-quantization layer after the network input layer. ... The network robustness can then be evaluated by gradient-based attack algorithms as usual, owing to the differentiability. We call such an extended network a soft-quantization network. ...
arXiv:1810.05665v2 fatcat:kt66vwh3kjdexlneyljupzkmue
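
The differentiable soft-quantization layer described here replaces the hard staircase of a quantizer with a smooth surrogate so gradients still flow. One common construction is a sum of steep sigmoids; a sketch with the level count and steepness as illustrative assumptions:

```python
import torch

def soft_quantize(x: torch.Tensor, levels: int = 4, steepness: float = 50.0) -> torch.Tensor:
    """Smoothly map x in [0, 1] onto `levels` near-flat plateaus; as
    `steepness` grows this approaches hard uniform quantization."""
    # Step locations of the hard quantizer round(x * (levels-1)) / (levels-1).
    thresholds = (torch.arange(1, levels, device=x.device) - 0.5) / (levels - 1)
    steps = torch.sigmoid(steepness * (x.unsqueeze(-1) - thresholds))
    return steps.sum(dim=-1) / (levels - 1)
```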

TREND: Transferability based Robust ENsemble Design [article]

Deepak Ravikumar, Sangamesh Kodge, Isha Garg, Kaushik Roy
2021 arXiv   pre-print
We observe that transferability is architecture-dependent for both weight- and activation-quantized models. ... In this work, we study the effect of network architecture, initialization, optimizer, input, weight, and activation quantization on the transferability of adversarial samples. ... Such query-based attacks, often labeled black-box attacks, succeed because of the transferability of adversarial perturbations: attacks crafted to fool one network often fool another network ...
arXiv:2008.01524v2 fatcat:p762jkrncff33opjpg73bcce2a
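
Transferability, the property this abstract highlights, is typically measured by crafting adversarial examples on a source model and testing them on a target model. A minimal sketch of such a measurement, with the one-step FGSM attack and budget as assumptions (the paper studies many attack and architecture combinations):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

def transfer_fooling_rate(source, target, x, y, eps=8 / 255):
    """Craft FGSM examples on `source`, report how often `target` errs."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(source(x_adv), y).backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
    return 1.0 - accuracy(target, x_adv, y)  # crude proxy for transferability
```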

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [article]

Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu
2020 arXiv   pre-print
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly when compared to a normally ... Present state-of-the-art defenses against adversarial attacks require the networks to be explicitly trained using adversarial samples that are computationally expensive to generate. ... Crucially, PGD-trained models are robust against various gradient-based iterative attacks, as well as several variants of non-gradient-based attacks. ...
arXiv:2004.00306v1 fatcat:n4zbs4ovjfaphipucacdun67ri
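
Since PGD is the reference attack throughout this line of work, a compact l_∞ PGD sketch may help; the step size, iteration count, and random start are standard but illustrative choices:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterated FGSM with projection onto the l_inf ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```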

Adversarial Dual Network Learning with Randomized Image Transform for Restoring Attacked Images

Jianhe Yuan, Zhihai He
2020 IEEE Access  
We develop a new method for defending deep neural networks against attacks using adversarial dual network learning with a randomized nonlinear image transform. ... INDEX TERMS Adversarial attack, adversarial defense, deep neural network. ... During the past few years, a number of methods have been developed to construct adversarial samples to attack deep neural networks, including the fast gradient sign (FGS) method [2], Jacobian-based saliency ...
doi:10.1109/access.2020.2969288 fatcat:zmvphje7mracvkxr63ggggnwpa
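
The snippet does not specify the randomized nonlinear transform, so the following is purely an illustrative stand-in (a randomly drawn gamma curve); the point is only that a nonlinear input mapping, re-sampled on every call, makes input gradients hard to estimate:

```python
import torch

def randomized_transform(x: torch.Tensor) -> torch.Tensor:
    """Apply a randomly drawn gamma curve to an input in [0, 1]."""
    gamma = torch.empty(1).uniform_(0.8, 1.25).item()  # re-drawn per call
    return x.clamp(0.0, 1.0).pow(gamma)
```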

QUANOS- Adversarial Noise Sensitivity Driven Hybrid Quantization of Neural Networks [article]

Priyadarshini Panda
2020 arXiv   pre-print
In this paper, we investigate the use of quantization to potentially resist adversarial attacks.  ...  Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial attacks, wherein, a model gets fooled by applying slight perturbations on the input.  ...  This implies that deterring the flow of adversarial gradients can result in improved robustness.  ... 
arXiv:2004.11233v2 fatcat:dpaio4flvrds3m6ga6ghhog54e
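
QUANOS ranks layers by their adversarial noise sensitivity and assigns precision accordingly. The exact ANS metric is not given in the snippet; the probe below is an illustrative stand-in that scores each parameter tensor by its mean absolute gradient under the classification loss:

```python
import torch
import torch.nn.functional as F

def layer_sensitivity(model, x, y):
    """Mean absolute loss gradient per named parameter: a rough proxy
    for how strongly each layer responds to (adversarial) input noise."""
    model.zero_grad()
    F.cross_entropy(model(x), y).backward()
    return {
        name: p.grad.abs().mean().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
```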

Noise Sensitivity-Based Energy Efficient and Robust Adversary Detection in Neural Networks [article]

Rachel Sterneck, Abhishek Moitra, Priyadarshini Panda
2021 arXiv   pre-print
We use Adversarial Noise Sensitivity (ANS), a novel metric for measuring the adversarial gradient contribution of different intermediate layers of a network.  ...  We also demonstrate the effects of quantization on our detector-appended networks.  ...  We propose ANS-based detectors as a robust and efficient method for preventing adversarial attacks, and we encourage future research to continue analyzing adversarial examples from a structural perspective  ... 
arXiv:2101.01543v1 fatcat:k7uy7tr6grfcrp3ua4nlf54coa

Error-Silenced Quantization: Bridging Robustness and Compactness

Zhicong Tang, Yinpeng Dong, Hang Su
2020 International Joint Conference on Artificial Intelligence  
As deep neural networks (DNNs) advance rapidly, quantization has become a widely used standard for deployment on resource-limited hardware. ... However, DNNs are widely accepted to be vulnerable to adversarial attacks, and quantization is found to further weaken the robustness. ... To thoroughly examine whether our method is truly secure, we test it with the L2-bounded Boundary attack [Brendel et al., 2018] and NAttack for decision-based and score-based black-box attacks, respectively ...
dblp:conf/ijcai/TangD020 fatcat:eg4wydgqgbgo7kifdewwlfzyd4

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks [article]

Rémi Bernhard, Pierre-Alain Moellic, Jean-Max Dutertre
2020 arXiv   pre-print
In this article, we investigate the adversarial robustness of quantized neural networks under different threat models for a classical supervised image classification task. ... an ensemble-based defense. ... Therefore, fully-binarized neural networks do not bring much robustness improvement compared to full-precision models against gradient-based attacks, as claimed in [27]. ...
arXiv:1909.12741v2 fatcat:5aatytfrwzdvzbnbdzab4au5h4
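
The low-bitwidth weight quantization under study here is commonly a symmetric uniform quantizer; a sketch, with the bit-width as an illustrative choice:

```python
import torch

def quantize_weights(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale
```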

A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness [article]

Chang Song, Riya Ranjan, Hai Li
2021 arXiv   pre-print
On the other hand, recent research has found that neural networks are vulnerable to adversarial attacks, and the robustness of a neural network model can only be improved with defense methods, such as ... to choose the best quantization parameter settings for a neural network. ... For defending against adversarial attacks using quantization, one recent work [7] shows that quantization can defend against adversarial attacks through gradient masking, though the effectiveness is limited. ...
arXiv:2110.12308v1 fatcat:kemcjhauo5c57iyfrwqg42w35y
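
A layer-wise adversarial-aware quantization search of the kind this abstract describes needs (i) a per-layer robustness score and (ii) a rule mapping scores to bit-widths. Reusing the sensitivity probe sketched earlier, a deliberately simple rule might look like this; the median cutoff and the two bit-widths are assumptions for illustration only:

```python
def assign_bitwidths(sensitivity: dict, low_bits: int = 4, high_bits: int = 8) -> dict:
    """Give more precision to the layers most sensitive to perturbations."""
    cutoff = sorted(sensitivity.values())[len(sensitivity) // 2]  # median-ish
    return {
        name: high_bits if score >= cutoff else low_bits
        for name, score in sensitivity.items()
    }
```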
Showing results 1–15 of 1,664