
Explaining Adversarial Examples by Local Properties of Convolutional Neural Networks

Hamed H. Aghdam, Elnaz J. Heravi, Domenec Puig
2017 Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications  
In this work, we analyze some of the local properties of ConvNets that are directly related to their unreliability against adversarial examples. We show that ConvNets are not locally isotropic and symmetric.  ...  The vulnerability of ConvNets to adversarial examples has mainly been studied by devising solutions for generating adversarial examples.  ...
doi:10.5220/0006123702260234 dblp:conf/visapp/AghdamHP17a fatcat:m4zevghjjjc55n5er7vnqfiug4

Intriguing properties of neural networks [article]

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus
2014 arXiv   pre-print
It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.  ...  Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks.  ...  Yet, we found that adversarial examples are relatively robust and are shared by neural networks with varied numbers of layers and activations, or trained on different subsets of the training data.  ...
arXiv:1312.6199v4 fatcat:bzxlylep7vh33ouuowgltsm6nm
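
The paper constructs these perturbations with a box-constrained L-BFGS search; the sketch below instead uses the simpler single gradient-sign step popularized in follow-up work, only to illustrate how little an input needs to move. The `model` argument is any differentiable PyTorch classifier; nothing here reproduces the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.03):
    """Perturb x by one gradient-sign step of size eps (illustrative only).

    model: differentiable classifier returning logits
    x:     input tensor of shape (1, C, H, W), values in [0, 1]
    label: ground-truth class index, LongTensor of shape (1,)
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```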

Verification and Repair of Neural Networks: A Progress Report on Convolutional Models [chapter]

Dario Guidotti, Francesco Leofante, Luca Pulina, Armando Tacchella
2019 Lecture Notes in Computer Science  
In this paper, we present our current efforts to tackle repair of deep convolutional neural networks using ideas borrowed from Transfer Learning.  ...  Using results obtained on the popular MNIST and CIFAR10 datasets, we show that models of deep convolutional neural networks can be transformed into simpler ones preserving their accuracy, and we discuss how  ...  For a more detailed study on Convolutional Neural Networks we refer to [9].  ...
doi:10.1007/978-3-030-35166-3_29 fatcat:swspgwzllfb5fpenhgzq3k44ji

Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples [article]

Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel
2021 arXiv   pre-print
Inspired by theoretical work on linear full-width convolutional models, we hypothesize that the local (i.e. bounded-width) convolutional operations commonly used in current neural networks are implicitly  ...  Adversarial Attacks are still a significant challenge for neural networks.  ...  This research was also supported by Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003. The U.S.  ... 
arXiv:2006.11440v4 fatcat:l5kykgnqlngyrgodgqlcxxwrne
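
One way to probe the high-frequency claim is to look at where an adversarial perturbation's energy sits in the 2-D Fourier spectrum. The helper below is a hedged illustration (not the authors' analysis); it assumes a grayscale perturbation array such as `x_adv - x`.

```python
import numpy as np

def high_frequency_fraction(delta, cutoff=0.25):
    """Fraction of the perturbation's spectral energy above a radial frequency cutoff.

    delta:  2-D array, e.g. an adversarial perturbation x_adv - x
    cutoff: radius (fraction of the maximum radius) separating low from high frequencies
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(delta))) ** 2
    h, w = delta.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance of every frequency bin from the centre of the shifted spectrum.
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return spectrum[r > cutoff].sum() / spectrum.sum()
```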

Input Validation for Neural Networks via Runtime Local Robustness Verification [article]

Jiangchao Liu, Liqian Chen, Antoine Mine, Ji Wang
2020 arXiv   pre-print
Experiments show that our approach can protect neural networks from adversarial examples and improve their accuracy.  ...  We observe that the robustness radii of correctly classified inputs are much larger than those of misclassified inputs, which include adversarial examples, especially those from strong adversarial attacks.  ...  Local robustness properties ensure that a neural network is immune to adversarial examples on a set of inputs within δ in L_p norm distance.  ...
arXiv:2002.03339v1 fatcat:hxogvcxshba7pax7s2tcigovcm
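
The robustness radius of an input is, informally, the largest δ such that every input within δ (in an L_p norm) keeps the same predicted class. The paper computes such radii with formal verification; the sketch below only estimates them by random sampling in the L_∞ ball, and `predict` is an assumed black-box classifier.

```python
import numpy as np

def estimate_robustness_radius(predict, x, radii, n_samples=200, seed=0):
    """Largest radius in `radii` at which all sampled L_inf perturbations of x
    keep the predicted class unchanged (a crude, sampling-based estimate).

    predict: function mapping an input array to a class index
    radii:   increasing candidate radii, e.g. np.linspace(0.0, 0.1, 11)
    """
    rng = np.random.default_rng(seed)
    base_class = predict(x)
    robust_radius = 0.0
    for r in radii:
        noise = rng.uniform(-r, r, size=(n_samples,) + x.shape)
        if all(predict(x + n) == base_class for n in noise):
            robust_radius = r
        else:
            break
    return robust_radius
```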

Exploring Adversarial Examples: Patterns of One-Pixel Attacks [article]

David Kügler, Alexander Distergoft, Arjan Kuijper, Anirban Mukhopadhyay
2018 arXiv   pre-print
We hypothesize that adversarial examples might result from the incorrect mapping of image space to the low-dimensional generation manifold by deep networks.  ...  Failure cases of black-box deep learning, e.g. adversarial examples, might have severe consequences in healthcare.  ...  [4] try to gain insight into adversarial examples by training different neural networks on a synthetic dataset of two concentric spheres with different radii.  ...
arXiv:1806.09410v1 fatcat:mtcfm2rtovh4lkbo6eawfjn674
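
A minimal sketch of a one-pixel attack follows, using plain random search instead of the differential evolution employed in the original one-pixel-attack work; `predict_proba` is an assumed black-box scoring function, and the image is assumed to be an (H, W, C) array in [0, 1].

```python
import numpy as np

def one_pixel_attack(predict_proba, x, true_class, n_trials=1000, seed=0):
    """Randomly try single-pixel changes until the true class loses the argmax.

    predict_proba: function mapping an (H, W, C) image to a probability vector
    x:             image as a float array with values in [0, 1]
    """
    rng = np.random.default_rng(seed)
    h, w, c = x.shape
    for _ in range(n_trials):
        candidate = x.copy()
        # Overwrite one randomly chosen pixel with a random colour.
        candidate[rng.integers(h), rng.integers(w)] = rng.random(c)
        if np.argmax(predict_proba(candidate)) != true_class:
            return candidate  # a successful one-pixel adversarial example
    return None  # no success within the trial budget
```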

On the Effect of Low-Rank Weights on Adversarial Robustness of Neural Networks [article]

Peter Langenberg, Emilio Rafael Balda, Arash Behboodi, Rudolf Mathar
2021 arXiv   pre-print
Recently, there has been an abundance of works on designing Deep Neural Networks (DNNs) that are robust to adversarial examples.  ...  The effect of nuclear norm regularization on adversarial robustness is paramount when it is applied to convolutional neural networks.  ...  In this work, we use Fully Connected Neural Networks (FCNNs) to refer to DNNs that only contain fully-connected layers, while Convolutional Neural Networks (CNNs) are DNNs containing convolutional filters  ...
arXiv:1901.10371v2 fatcat:6wcma2jalvd2dnwzqaqg433qxm
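
Nuclear-norm regularization penalizes the sum of a weight matrix's singular values, which biases the learned weights toward low rank. The PyTorch snippet below is a generic sketch of adding such a penalty to a training loss; how the paper reshapes convolutional kernels before taking the norm is not reproduced here.

```python
import torch

def nuclear_norm_penalty(model, coeff=1e-4):
    """coeff times the sum of singular values of every weight matrix in `model`.

    Convolutional kernels are flattened to (out_channels, -1) before taking the
    norm; the paper may use a different reshaping.
    """
    penalty = 0.0
    for name, w in model.named_parameters():
        if "weight" in name and w.dim() >= 2:
            penalty = penalty + torch.linalg.svdvals(w.reshape(w.shape[0], -1)).sum()
    return coeff * penalty

# Usage inside a training step:
#   loss = criterion(model(x), y) + nuclear_norm_penalty(model)
```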

Lower bounds on the robustness to adversarial perturbations

Jonathan Peck, Joris Roels, Bart Goossens, Yvan Saeys
2017 Neural Information Processing Systems  
The input-output mappings learned by state-of-the-art neural networks are significantly discontinuous.  ...  In this work, we take steps towards a formal characterization of adversarial perturbations by deriving lower bounds on the magnitudes of perturbations necessary to change the classification of neural networks  ...  This has led to the identification of several unexpected and counter-intuitive properties of neural networks.  ... 
dblp:conf/nips/PeckRGS17 fatcat:iplmz3bo7rfnbm7av62t227boe
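
For the simplest case, a binary linear classifier f(x) = w·x + b, the smallest L2 perturbation that can change the decision has the well-known closed form |f(x)| / ||w||_2; bounds of this flavor are what the paper generalizes layer by layer. A quick numeric check with made-up values:

```python
import numpy as np

# Hypothetical binary linear classifier: class = sign(w.x + b).
w, b = np.array([2.0, -1.0, 0.5]), 0.3
x = np.array([0.4, 0.2, -0.1])

f_x = w @ x + b
# No L2 perturbation smaller than |f(x)| / ||w||_2 can flip the sign of f.
lower_bound = abs(f_x) / np.linalg.norm(w)
print(lower_bound)
```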

Efficient Solutions of the CHES 2018 AES Challenge Using Deep Residual Neural Networks and Knowledge Distillation on Adversarial Examples [article]

Aron Gohr, Sven Jacob, Werner Schindler
2020 IACR Cryptology ePrint Archive  
...  be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network.  ...  Fourth, we study the properties of adversarially generated side-channel traces for our model.  ...  Acknowledgements We thank Friederike Laus for her careful reading of an earlier iteration of this document.  ...
dblp:journals/iacr/GohrJS20 fatcat:soqwltw3infi7jz2as4ofvbji4

Improving the Robustness of Neural Networks Using K-Support Norm Based Adversarial Training

Sheikh Waqas Akhtar, Saad Rehman, Mahmood Akhtar, Muazzam A. Khan, Farhan Riaz, Qaiser Chaudry, Rupert Young
2016 IEEE Access  
INDEX TERMS K-Support norm, robustness, generalization, convolutional neural networks, adversarial.  ...  It has been recently proposed that neural networks can be made robust against adversarial noise by training them using data corrupted with adversarial noise itself.  ...  Why do neural networks misclassify adversarial examples?  ...
doi:10.1109/access.2016.2643678 fatcat:45rkm6v4w5h35csj5owxp76yv4
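
A schematic adversarial-training step, i.e. training on inputs corrupted with adversarial noise generated by the current model, is sketched below in PyTorch. It uses a generic gradient-sign perturbation rather than the K-support-norm formulation studied in the paper.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One optimizer step on a batch perturbed with gradient-sign adversarial noise."""
    # Build the perturbed batch using the current model parameters.
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    # Standard descent step on the adversarially corrupted batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```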

Truth or Backpropaganda? An Empirical Investigation of Deep Learning Theory [article]

Micah Goldblum, Jonas Geiping, Avi Schwarzschild, Michael Moeller, Tom Goldstein
2020 arXiv   pre-print
In this work, we: (1) prove the widespread existence of suboptimal local minima in the loss landscape of neural networks, and we use our theory to find examples; (2) show that small-norm parameters are  ...  We empirically evaluate common assumptions about neural networks that are widely held by practitioners and theorists alike.  ...  Additional funding was provided by the Sloan Foundation.  ... 
arXiv:1910.00359v3 fatcat:oas2iunoyfantiepiklcz5pude

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, Martin Vechev
2018 2018 IEEE Symposium on Security and Privacy (SP)  
Based on overapproximation, AI2 can automatically prove safety properties (e.g., robustness) of realistic neural networks (e.g., convolutional neural networks).  ...  Concretely, we introduce abstract transformers that capture the behavior of fully connected and convolutional neural network layers with rectified linear unit activations (ReLU), as well as max pooling  ...  We ran AI2 to check whether each neural network satisfies the robustness properties for the respective dataset. We compared the results using different abstract domains,  ...
doi:10.1109/sp.2018.00058 dblp:conf/sp/GehrMDTCV18 fatcat:6o35qb3wurhadnbaylclnv7hsq
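
AI2's abstract transformers over-approximate the effect of each layer on a whole set of inputs. The toy sketch below does the same in the simplest abstract domain, elementwise intervals (boxes), for an affine layer followed by ReLU; AI2 itself uses richer zonotope and polyhedra domains.

```python
import numpy as np

def interval_affine_relu(W, b, lo, hi):
    """Propagate the elementwise box [lo, hi] through x -> relu(W @ x + b).

    Returns sound lower/upper bounds on the outputs for every input in the box,
    using standard interval arithmetic (an over-approximation).
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # smallest reachable pre-activation
    out_hi = W_pos @ hi + W_neg @ lo + b   # largest reachable pre-activation
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)
```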

What's the relationship between CNNs and communication systems? [article]

Hao Ge, Xiaoguang Tu, Yanxiang Gong, Mei Xie, Zheng Ma
2020 arXiv   pre-print
The interpretability of Convolutional Neural Networks (CNNs) is an important topic in the field of computer vision.  ...  Finally, through the analysis of some cutting-edge research on neural networks, we find that the inherent relation between these two tasks can help explain this research reasonably, as well as  ...  Introduction Convolutional neural networks (CNNs) have demonstrated their superiority in many fields.  ...
arXiv:2003.01413v1 fatcat:vq4jdfizuzb5vm4nwexyjcqooa

Shift Invariance Can Reduce Adversarial Robustness [article]

Songwei Ge, Vasu Singla, Ronen Basri, David Jacobs
2021 arXiv   pre-print
Using this, we prove that shift invariance in neural networks produces adversarial examples for the simple case of two classes, each consisting of a single image with a black or white dot on a gray background  ...  Shift invariance is a critical property of CNNs that improves performance on classification.  ...  Acknowledgments and Disclosure of Funding The authors thank the U.S.-Israel Binational Science Foundation, grant number 2018680, the National Science Foundation, grant no.  ... 
arXiv:2103.02695v3 fatcat:ank4dhxnl5ctdi67ldkufjov5q

On the Structural Sensitivity of Deep Convolutional Networks to the Directions of Fourier Basis Functions [article]

Yusuke Tsuzuku, Issei Sato
2019 arXiv   pre-print
We derived this property by specializing a hypothesized cause of the sensitivity, known as the linearity of neural networks, to convolutional networks, and we validated it empirically.  ...  As a by-product of the analysis, we propose an algorithm to create shift-invariant universal adversarial perturbations available in black-box settings.  ...  Acknowledgement YT was supported by a Toyota/Dwango AI scholarship. IS was supported by KAKENHI 17H04693.  ...
arXiv:1809.04098v2 fatcat:5ybkuvmb65cn3czharbdbxmnlu
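
The directional sensitivity studied here can be probed empirically by nudging an input along individual 2-D Fourier basis functions and recording how far the logits move. The loop below is only an illustration of that measurement, with `model_logits` an assumed function from a grayscale image to a logit vector.

```python
import numpy as np

def fourier_direction_sensitivity(model_logits, x, eps=0.05):
    """Logit change caused by a small step along each 2-D Fourier basis direction.

    model_logits: function mapping an (H, W) image to a logit vector (numpy array)
    x:            grayscale image as a numpy array
    """
    h, w = x.shape
    yy, xx = np.mgrid[0:h, 0:w]
    base = model_logits(x)
    sensitivity = np.zeros((h, w))
    for u in range(h):
        for v in range(w):
            # Real-valued Fourier basis image at frequency (u, v), unit L2 norm.
            basis = np.cos(2 * np.pi * (u * yy / h + v * xx / w))
            basis /= np.linalg.norm(basis) + 1e-12
            sensitivity[u, v] = np.linalg.norm(model_logits(x + eps * basis) - base)
    return sensitivity
```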
Showing results 1 — 15 out of 8,744 results