
Parseval Networks: Improving Robustness to Adversarial Examples [article]

Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, Nicolas Usunier
2017 arXiv   pre-print
Parseval networks are empirically and theoretically motivated by an analysis of the robustness of the predictions made by deep neural networks when their input is subject to an adversarial perturbation  ...  We show that Parseval networks match the state-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House Numbers (SVHN) while being more robust than their vanilla counterpart against adversarial  ...  results presented in the table validate our most important claim: Parseval networks significantly improve the robustness of vanilla models to adversarial examples.  ... 
arXiv:1704.08847v2 fatcat:mx2o7ppzcrgmphfrdyjtlpgzwm
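The Parseval constraint described in this abstract is maintained, per Cisse et al., by a retraction step applied to each weight matrix during training. A minimal NumPy sketch of that orthonormality iteration (the β value, matrix sizes, and iteration count here are illustrative; the paper applies a single small-β retraction per SGD step rather than iterating to convergence):

```python
import numpy as np

def parseval_retraction(W, beta=0.01, steps=2000):
    """Iterated retraction W <- (1 + beta) W - beta (W W^T) W.

    Repeated application drives the rows of W toward an orthonormal
    set (W W^T -> I), i.e. an approximate Parseval tight frame.
    """
    for _ in range(steps):
        W = (1 + beta) * W - beta * (W @ W.T) @ W
    return W

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 16))   # toy "layer" weight matrix
W = parseval_retraction(W)
gram = W @ W.T                            # close to the 4x4 identity
```

Because the rows end up (approximately) orthonormal, the layer's singular values are near 1, which bounds how much the layer can amplify an adversarial perturbation.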

Laplacian networks: bounding indicator function smoothness for neural networks robustness

Carlos Lassance, Vincent Gripon, Antonio Ortega
2021 APSIPA Transactions on Signal and Information Processing  
We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised learning vision datasets for various types of perturbations.  ...  We also show it can be combined with existing methods to increase overall robustness.  ...  image). 3) Adversarial robustness: We next evaluate robustness to adversarial inputs, which are specifically built to fool the network function.  ... 
doi:10.1017/atsip.2021.2 fatcat:4zhdyr3v3vb6rpmnbrnlfetipe

Laplacian Networks: Bounding Indicator Function Smoothness for Neural Network Robustness [article]

Carlos Eduardo Rosar Kos Lassance, Vincent Gripon, Antonio Ortega
2018 arXiv   pre-print
For the past few years, Deep Neural Network (DNN) robustness has become a question of paramount importance.  ...  We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness of DNNs on classical supervised learning vision datasets.  ...  Adversarial Robustness: We next evaluate robustness to adversarial inputs, which are specifically built to fool the network function.  ... 
arXiv:1805.10133v2 fatcat:ulm2dhazi5batoctptis5jgxuy

Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations [article]

Ziquan Liu, Yufei Cui, Antoni B. Chan
2022 arXiv   pre-print
The derived regularizer is an upper bound for the input gradient of the network so minimizing the improved regularizer also benefits the adversarial robustness.  ...  We demonstrate the efficacy of our proposed regularizer on various datasets and neural network architectures at improving generalization and adversarial robustness.  ...  Our experiment shows that WEISSI is more effective at improving the adversarial robustness than Parseval network.  ... 
arXiv:2008.02965v2 fatcat:orvnrau5dbdnzf2ekx47dn6fnu

L2-Nonexpansive Neural Networks [article]

Haifeng Qian, Mark N. Wegman
2019 arXiv   pre-print
Without needing any adversarial training, the proposed classifiers exceed the state of the art in robustness against white-box L2-bounded adversarial attacks.  ...  We develop the known methodology of controlling Lipschitz constants to realize its full potential in maximizing robustness, with a new regularization scheme for linear layers, new ways to adapt nonlinearities  ...  The reported robustness results of Cisse et al. (2017), however, are much weaker than those by adversarial training in Madry et al. (2017). We differ from Parseval networks in a number of ways.  ... 
arXiv:1802.07896v4 fatcat:66w5pbrxifbabovqyr7x2ykufy
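This entry, like Parseval networks, rests on controlling per-layer Lipschitz constants; for a dense layer, the L2 Lipschitz constant equals the largest singular value of the weight matrix, which power iteration estimates cheaply. A minimal sketch (the final rescaling rule is an illustrative way to enforce nonexpansiveness, not either paper's exact scheme):

```python
import numpy as np

def spectral_norm(W, iters=100):
    # Power iteration: estimates the largest singular value of W,
    # which equals the layer's Lipschitz constant in the L2 norm.
    v = np.ones(W.shape[1]) / np.sqrt(W.shape[1])
    for _ in range(iters):
        u = W @ v
        u = u / np.linalg.norm(u)
        v = W.T @ u
        v = v / np.linalg.norm(v)
    return float(u @ W @ v)

W = np.diag([3.0, 1.0, 0.5])     # singular values: 3, 1, 0.5
sigma = spectral_norm(W)         # approximately 3.0
W_1lip = W / max(1.0, sigma)     # rescaled layer is at most 1-Lipschitz
```

Composing only 1-Lipschitz (nonexpansive) layers guarantees that an input perturbation of L2 norm ε moves the output by at most ε, which is the core of this line of certification work.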

Improved Network Robustness with Adversary Critic [article]

Alexander Matyasko, Lap-Pui Chau
2018 arXiv   pre-print
To improve the stability of the adversarial mapping, we introduce an adversarial cycle-consistency constraint, which ensures that the adversarial mapping of the adversarial examples is close to the original  ...  Our method surpasses networks trained with adversarial training in terms of robustness.  ...  Adversarial training [7, 8, 9] is the most popular approach to improve network robustness. Adversarial examples are generated online using the latest snapshot of the network parameters.  ... 
arXiv:1810.12576v1 fatcat:sj77lhqxa5glxfqmuddg35qere
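The online generation referenced in this snippet typically uses a gradient-based attack such as FGSM, recomputed each step from the current parameters. A toy sketch on a hypothetical logistic model (the weights, input, and ε are illustrative, not from any of the cited papers):

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps=0.1):
    # Fast Gradient Sign Method: step each coordinate by eps in the
    # direction that increases the loss.
    return x + eps * np.sign(grad_x)

# Hypothetical toy model: logistic regression with cross-entropy loss.
w = np.array([1.0, -2.0])     # fixed "network" weights (illustrative)
x = np.array([0.5, 0.5])      # clean input
y = 1                         # true label

p = 1.0 / (1.0 + np.exp(-(w @ x)))         # P(y=1 | x)
grad_x = (p - y) * w                       # d(cross-entropy)/dx
x_adv = fgsm_perturb(x, grad_x, eps=0.1)
p_adv = 1.0 / (1.0 + np.exp(-(w @ x_adv))) # confidence drops on x_adv
```

In an adversarial training loop, x_adv would be fed back as a training example; regenerating it from the latest parameter snapshot is what keeps the attack matched to the current model.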

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks [article]

Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
2018 arXiv   pre-print
To take a steady step towards robust classifiers, we aim to create neural network models provably defended from perturbations.  ...  In experimental evaluations, our method showed its ability to provide a non-trivial guarantee and enhance robustness for even large networks.  ...  G.2 Improvements from Parseval networks: Here, we discuss the difference between our work and Cisse et al. [8].  ... 
arXiv:1802.04034v3 fatcat:xja2kpu3pnbt3jtqljpl2yn6ze

Gabor Layers Enhance Network Robustness [article]

Juan C. Pérez, Motasem Alfarra, Guillaume Jeanneret, Adel Bibi, Ali Thabet, Bernard Ghanem, Pablo Arbeláez
2020 arXiv   pre-print
training to further enhance network robustness.  ...  Furthermore, we experimentally show how our regularizer provides consistent robustness improvements.  ...  For instance, we improve adversarial robustness on certain networks by almost 18% with ℓ∞-bounded noise of 8/255.  ... 
arXiv:1912.05661v2 fatcat:5lcxkaj44rhj7gbh4exlrntpyq

Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks [article]

Hanxun Huang, Yisen Wang, Sarah Monazam Erfani, Quanquan Gu, James Bailey, Xingjun Ma
2022 arXiv   pre-print
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.  ...  of blocks) of the network can actually improve adversarial robustness; and 3) under the same parameter budget, there exists an optimal architectural configuration for adversarial robustness.  ...  The adversarial robustness of the 125 networks (adversarially trained using SAT) against PGD-20 test adversarial examples is plotted in Figure 1b.  ... 
arXiv:2110.03825v5 fatcat:k3qzldlz6vetbp7bbg4hvermai

Retrieval-Augmented Convolutional Neural Networks Against Adversarial Examples

Jake Junbo Zhao, Kyunghyun Cho
2019 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Our evaluation of the proposed approach against seven readily-available adversarial attacks on three datasets (CIFAR-10, SVHN and ImageNet) demonstrates the improved robustness compared to a vanilla convolutional  ...  The proposed hybrid architecture combining a convolutional network and an off-the-shelf retrieval engine was designed to mitigate the adverse effect of off-manifold adversarial examples, while the proposed  ...  This result suggests that it is necessary to tackle both types of adversarial examples to improve the robustness of a deep neural network based classifier to adversarial examples.  ... 
doi:10.1109/cvpr.2019.01183 dblp:conf/cvpr/ZhaoC19 fatcat:h4noqakhy5davmlxvypisip54m

Improving Network Robustness against Adversarial Attacks with Compact Convolution [article]

Rajeev Ranjan, Swami Sankaranarayanan, Carlos D. Castillo, Rama Chellappa
2018 arXiv   pre-print
In particular, we show that learning features in a closed and bounded space improves the robustness of the network.  ...  These attacks add a small perturbation to the input image that causes the network to misclassify the sample. In this paper, we focus on neutralizing adversarial attacks by compact feature learning.  ...  [22] proposed Parseval Networks, which improve robustness by enforcing the weight matrices of convolutional and linear layers to be Parseval tight frames. Papernot et al.  ... 
arXiv:1712.00699v2 fatcat:mk2e4tie6rcnxnjjnnt5yglrsm

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks [article]

Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, Shuichi Adachi
2019 arXiv   pre-print
We propose Absum, which is a regularization method for improving adversarial robustness of convolutional neural networks (CNNs).  ...  We also reveal that Absum can improve robustness against gradient-based attacks (projected gradient descent) when used with adversarial training.  ...  , and Parseval networks constrain the induced norm of linear layers to improve robustness (Cisse et al. 2017).  ... 
arXiv:1909.08830v1 fatcat:bvc2odu7fvhp7kb7ivftgeve44

Noise Optimization for Artificial Neural Networks [article]

Li Xiao, Zeliang Zhang, Yijie Peng
2021 arXiv   pre-print
Adding noise to artificial neural networks (ANNs) has been shown to be able to improve robustness in previous work.  ...  In numerical experiments, our proposed method can achieve significant performance improvement on robustness of several popular ANN structures under both black box and white box attacks tested in various  ...  Previous work focusing on improving robustness under adversarial attacks includes feature squeezing (Xu, Evans, and Qi, 2017), distillation network (Papernot et al., 2016), input transformation (e.g  ... 
arXiv:2102.04450v1 fatcat:opijf3tjjvdndpe6lojxkfl5vi

Retrieval-Augmented Convolutional Neural Networks for Improved Robustness against Adversarial Examples [article]

Jake Zhao, Kyunghyun Cho
2018 arXiv   pre-print
Our evaluation of the proposed approach against five readily-available adversarial attacks on three datasets (CIFAR-10, SVHN and ImageNet) demonstrates the improved robustness compared to the vanilla convolutional  ...  The proposed hybrid architecture combining a convolutional network and an off-the-shelf retrieval engine was designed to mitigate the adverse effect of off-manifold adversarial examples, while the proposed  ...  The proposed RaCNN is more robust against all the adversarial attacks compared to the vanilla convolutional network.  ... 
arXiv:1802.09502v1 fatcat:ukgmpcgahracxlosp45zffhtru

Deep Defense: Training DNNs with Improved Adversarial Robustness [article]

Ziang Yan, Yiwen Guo, Changshui Zhang
2018 arXiv   pre-print
., adversarial examples) to fool well-trained DNN classifiers into making arbitrary predictions. To address this problem, we propose a training recipe named "deep defense".  ...  Despite the efficacy on a variety of computer vision tasks, deep neural networks (DNNs) are vulnerable to adversarial attacks, limiting their applications in security-critical systems.  ...  Parseval training also yields models with improved robustness to the FGS attack, but they are still vulnerable to the DeepFool attack.  ... 
arXiv:1803.00404v3 fatcat:kqo5qp5zkfdrhmenqitkjkljv4
Showing results 1 — 15 out of 119 results