
Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images [article]

Anh Nguyen, Jason Yosinski, Jeff Clune
2015 arXiv pre-print
It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). ... with high confidence as belonging to each dataset class. ...
arXiv:1412.1897v4 fatcat:6i3wpiohtrbs5dtc2joyrciyta
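
For context, a minimal PyTorch sketch of the gradient-ascent variant of this idea (the paper's headline fooling images come from evolutionary algorithms, but it also reports gradient-based ones): start from noise and ascend the gradient of one class's softmax confidence. The pretrained model, class index, and step counts below are illustrative assumptions, not the paper's setup.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Start from noise and maximize the classifier's confidence in one
    # target class; the result is typically unrecognizable to humans yet
    # classified with near certainty.
    model = models.resnet18(weights="IMAGENET1K_V1").eval()
    target = 954                      # illustrative target class index
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([x], lr=0.05)

    for _ in range(300):
        opt.zero_grad()
        conf = F.softmax(model(x), dim=1)[0, target]
        (-conf).backward()            # gradient ascent on confidence
        opt.step()
        x.data.clamp_(0.0, 1.0)       # keep pixels in a valid range

    with torch.no_grad():
        conf = F.softmax(model(x), dim=1)[0, target]
    print(f"final confidence: {conf.item():.3f}")   # often close to 1.0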

Deep neural networks are easily fooled: High confidence predictions for unrecognizable images

Anh Nguyen, Jason Yosinski, Jeff Clune
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
It is possible to produce images totally unrecognizable to human eyes that DNNs believe with near certainty are familiar objects, which we call "fooling images" (more generally, fooling examples). ... with high confidence as belonging to each dataset class. ...
doi:10.1109/cvpr.2015.7298640 dblp:conf/cvpr/NguyenYC15 fatcat:wconnpqkgvddlp5xzf6ise6yq4

Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [article]

Alex Wong, Mukund Mundhra, Stefano Soatto
2021 arXiv pre-print
We study the effect of adversarial perturbations of images on the estimates of disparity by deep learning models trained for stereo. ... These perturbations not only affect the specific model they are crafted for, but transfer to models with different architectures, trained with different loss functions. ... in high confidence predictions. ...
arXiv:2009.10142v3 fatcat:q4dpuxw55ndhhnmsc3hekqda6a
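
A minimal sketch of the attack family studied here, assuming a hypothetical stereo_net(left, right) -> disparity interface; the paper's perturbations are stronger than this one-step signed-gradient version, but the transfer effect it reports starts from attacks of this kind.

    import torch
    import torch.nn.functional as F

    def fgsm_stereo(stereo_net, left, right, disparity_gt, eps=0.02):
        # Perturb both images one signed-gradient step up the disparity
        # error. stereo_net's interface is an assumption for this sketch.
        left = left.clone().requires_grad_(True)
        right = right.clone().requires_grad_(True)
        loss = F.l1_loss(stereo_net(left, right), disparity_gt)
        loss.backward()
        return ((left + eps * left.grad.sign()).clamp(0, 1).detach(),
                (right + eps * right.grad.sign()).clamp(0, 1).detach())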

Capture the Bot: Using Adversarial Examples to Improve CAPTCHA Robustness to Bot Attacks [article]

Dorjan Hitaj, Briland Hitaj, Sushil Jajodia, Luigi V. Mancini
2020 arXiv pre-print
However, recent work in the literature has provided evidence of sophisticated bots that make use of advancements in machine learning (ML) to easily bypass existing CAPTCHA-based defenses. ... To date, CAPTCHAs have served as the first line of defense preventing unauthorized access by (malicious) bots to web-based services, while at the same time maintaining a trouble-free experience for ... EAs with indirect encoding: By using Compositional Pattern Producing Network (CPPN) [11] encodings, it is possible to generate unrecognizable images that fool many neural networks with high confidence. ...
arXiv:2010.16204v2 fatcat:cukulgj5ivbijlfuixwkjvwlma
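
The evolutionary search mentioned in the snippet can be illustrated with a much cruder, directly pixel-encoded (1+λ)-style loop; the model, image size, and hyperparameters are assumptions, and the CPPN encodings of [11] evolve a small generative network rather than raw pixels.

    import torch

    def evolve_fooling_image(model, target, steps=500, pop=32, sigma=0.1):
        # (1+lambda)-style search over raw pixels: mutate the current best
        # image and keep whichever mutant the classifier finds most
        # convincing for the target class.
        best = torch.rand(1, 3, 224, 224)
        best_fit = -1.0
        for _ in range(steps):
            cand = (best + sigma * torch.randn(pop, 3, 224, 224)).clamp(0, 1)
            with torch.no_grad():
                conf = torch.softmax(model(cand), dim=1)[:, target]
            i = conf.argmax()
            if conf[i].item() > best_fit:
                best_fit, best = conf[i].item(), cand[i:i+1]
        return best, best_fit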

Adversarial Robustness: Softmax versus Openmax [article]

Andras Rozsa, Manuel Günther, Terrance E. Boult
2017 arXiv pre-print
Deep neural networks (DNNs) provide state-of-the-art results on various tasks and are widely used in real-world applications. ... Various approaches have been developed for efficiently generating these so-called adversarial examples, but those mostly rely on ascending the gradient of loss. ...
arXiv:1708.01697v1 fatcat:hfcopitdlrebvltmzwfe3xp2dy
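
For contrast with the gradient-ascending attacks the snippet mentions, here is a crude stand-in for the open-set idea behind OpenMax: give the classifier an explicit "unknown" option. Real OpenMax recalibrates activation vectors with per-class Weibull fits; this thresholding version is only a sketch of the rejection behavior, not the actual algorithm.

    import torch

    def reject_or_classify(logits, threshold=0.9):
        # Abstain when softmax confidence is low; plain softmax has no
        # such "unknown" option, which is what OpenMax adds (properly via
        # per-class Weibull recalibration rather than a fixed threshold).
        probs = torch.softmax(logits, dim=1)
        conf, pred = probs.max(dim=1)
        pred = pred.clone()
        pred[conf < threshold] = -1   # -1 means "unknown"/rejected
        return pred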

Deep Learning Vector Quantization

Harm de Vries, Roland Memisevic, Aaron C. Courville
2016 The European Symposium on Artificial Neural Networks
While deep neural nets (DNNs) achieve impressive performance on image recognition tasks, previous studies have reported that DNNs give high confidence predictions for unrecognizable images. ... Motivated by the observation that such fooling examples might be caused by the extrapolating nature of the log-softmax, we propose to combine neural networks with Learning Vector Quantization (LVQ). ... Follow-up research [5] showed that DNNs are also easily fooled: images for which a DNN assigns high confidence while not coming from the data distribution. ...
dblp:conf/esann/VriesMC16 fatcat:m5ugpcfl3bb4dobajs3sa65noe
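
A minimal sketch of the distance-based output layer this abstract argues for (naming and sizing are mine, not the paper's): confidence comes from proximity to learned prototypes, so it cannot keep growing as features move far from all training data the way an extrapolating log-softmax can.

    import torch
    import torch.nn as nn

    class LVQHead(nn.Module):
        # Distance-based output layer in the spirit of DLVQ (sketch only).
        def __init__(self, feat_dim, n_classes, protos_per_class=1):
            super().__init__()
            self.protos = nn.Parameter(
                torch.randn(n_classes * protos_per_class, feat_dim))
            self.register_buffer(
                "proto_labels",
                torch.arange(n_classes).repeat_interleave(protos_per_class))

        def forward(self, z):                        # z: (batch, feat_dim)
            return -torch.cdist(z, self.protos) ** 2  # score = -squared distance

        def predict(self, z):
            return self.proto_labels[self.forward(z).argmax(dim=1)]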

Incorporating Hidden Layer representation into Adversarial Attacks and Defences [article]

Haojing Shen, Sihong Chen, Ran Wang, Xizhao Wang
2022 arXiv pre-print
The experiments show that our defence method can significantly improve the adversarial robustness of deep neural networks, achieving state-of-the-art performance even though we do not adopt adversarial ... And this defence strategy can be regarded as an activation function which can be applied to any kind of neural network. ... However, it can fool the DNNs. Anh et al. [27] find that some images unrecognizable to humans can be classified as a class with high confidence by DNNs. ...
arXiv:2011.14045v2 fatcat:hszluxbgjfbxhnwzdy3izez2di

AI-Powered GUI Attack and Its Defensive Methods [article]

Ning Yu, Zachary Tuttle, Carl Jake Thurnau, Emmanuel Mireku
2020 arXiv pre-print
It is twofold: (1) a piece of malware is designed to attack the existing GUI system by using AI-based object recognition techniques; (2) its defensive methods are discovered by generating adversarial examples and ... The results have shown that a generic GUI attack can be implemented and performed in a simple way based on current AI techniques, and its countermeasures are temporary but effective to mitigate the threats ... For example, when we used a lambda value of 500 and ran this image through the classifier, the Chrome icon was unrecognizable to the neural network. ...
arXiv:2001.09388v1 fatcat:snhfegheizbaff47kxcuaro2bu
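
The "lambda value of 500" reads like the usual trade-off weight in a regularized attack objective. A generic sketch of that family follows; the paper's exact formulation is not given in the snippet, so the objective, names, and defaults below are assumptions.

    import torch
    import torch.nn.functional as F

    def regularized_attack(model, x, true_label, lam=500.0, steps=200, lr=0.01):
        # Minimize ||delta||^2 - lam * loss(model(x + delta), true_label):
        # larger lam pushes harder toward misclassification at the cost of
        # a more visible perturbation. lam=500 mirrors the quoted value.
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = (delta ** 2).sum() \
                - lam * F.cross_entropy(model(x + delta), true_label)
            loss.backward()
            opt.step()
        return (x + delta).clamp(0, 1).detach()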

Towards Robust Classification with Image Quality Assessment [article]

Yeli Feng, Yiyu Cai
2020 arXiv pre-print
Recent studies have shown that deep convolutional neural networks (DCNNs) are vulnerable to adversarial examples and sensitive to perceptual quality as well as the acquisition conditions of images. ... These findings raise serious concerns about the adoption of DCNN-based applications for critical tasks. ... Using evolutionary algorithms, [20] shows that DCNNs can be easily fooled into making highly confident predictions on images that are unrecognizable to human eyes. ...
arXiv:2004.06288v1 fatcat:ehtc4qyv5vgefpw7okunkzabni

Adversarial Examples: Attacks and Defenses for Deep Learning

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2019 IEEE Transactions on Neural Networks and Learning Systems
Adversarial perturbations are imperceptible to humans but can easily fool DNNs in the testing/deploying stage. ... However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. ... In an image classification task, a false positive can be an adversarial image unrecognizable to humans, while deep neural networks predict it to belong to a class with a high confidence score. ...
doi:10.1109/tnnls.2018.2886017 pmid:30640631 fatcat:enznysw3svfzdjrmubwkedr6me

Adversarial Examples: Attacks and Defenses for Deep Learning [article]

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2018 arXiv pre-print
Adversarial examples are imperceptible to humans but can easily fool deep neural networks in the testing/deploying stage. ... The vulnerability to adversarial examples becomes one of the major risks for applying deep neural networks in safety-critical environments. ...
arXiv:1712.07107v3 fatcat:5wcz4h4eijdsdjeqwdpzbfbjeu

Bidirectional Learning for Robust Neural Networks

Sidney Pontes-Filho, Marcus Liwicki
2019 International Joint Conference on Neural Networks (IJCNN)
It consists of training an undirected neural network to map input to output and vice versa; therefore it can produce a classifier in one direction, and a generator in the opposite direction, for the same ... In this paper, two novel learning techniques are introduced which use BL for improving robustness to white-noise static and adversarial examples. ...
doi:10.1109/ijcnn.2019.8852120 dblp:conf/ijcnn/Pontes-FilhoL19 fatcat:ehkk2szojvan7igryf4thuyuhm
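
A toy illustration of the two-direction structure described here: one weight matrix used forward to classify and transposed backward to generate. The actual BL training procedure is more involved; the dimensions and sigmoid readout below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TiedBidirectional(nn.Module):
        # Shared parameters, two directions: classifier one way, a crude
        # generator the other. Sketch only, not the paper's training setup.
        def __init__(self, in_dim=784, n_classes=10):
            super().__init__()
            self.W = nn.Parameter(torch.randn(n_classes, in_dim) * 0.01)

        def classify(self, x):            # x -> class logits
            return x @ self.W.t()

        def generate(self, y_onehot):     # class code -> input-space pattern
            return torch.sigmoid(y_onehot @ self.W)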

Bidirectional Learning for Robust Neural Networks [article]

Sidney Pontes-Filho, Marcus Liwicki
2018 arXiv pre-print
It consists of training an undirected neural network to map input to output and vice versa; therefore it can produce a classifier in one direction, and a generator in the opposite direction, for the same ... In this paper, two novel learning techniques are introduced which use BL for improving robustness to white-noise static and adversarial examples. ...
arXiv:1805.08006v2 fatcat:pdyxf5c7h5hane5gkuazsyob44

Denoising Autoencoders for Overgeneralization in Neural Networks [article]

Giacomo Spigler
2019 arXiv pre-print
... with a high degree of confidence. ... Despite the recent developments that allowed neural networks to achieve impressive performance on a variety of applications, these models are intrinsically affected by the problem of overgeneralization ... A similar problem is 'fooling', whereby it is possible to generate images that are unrecognizable to humans but are nonetheless classified as one of the known classes with high confidence, for example ...
arXiv:1709.04762v3 fatcat:y7vtg3nryvb2zhtaj6tihgq4k4
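
The idea sketched in this abstract can be illustrated with a small denoising autoencoder whose reconstruction error flags inputs far from the data distribution (such as fooling images); the architecture, sizes, and threshold below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DAE(nn.Module):
        # Tiny denoising autoencoder; reconstruction error on clean input
        # serves as an out-of-distribution score after training.
        def __init__(self, dim=784, hidden=128):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
            self.dec = nn.Linear(hidden, dim)

        def forward(self, x, noise=0.2):
            return self.dec(self.enc(x + noise * torch.randn_like(x)))

    def looks_out_of_distribution(dae, x, threshold):
        # High reconstruction error -> input lies far from training data.
        err = ((dae(x, noise=0.0) - x) ** 2).mean(dim=1)
        return err > threshold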

Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics [article]

Xin Li, Fuxin Li
2017 arXiv pre-print
Those findings should lead to more insights about the classification mechanisms in deep convolutional neural networks. ... Instead of directly training a deep neural network to detect adversarial examples, a much simpler approach was proposed, based on statistics of the outputs of convolutional layers. ... Deep Convolutional Neural Networks: A deep convolutional neural network consists of many convolutional layers, each connected to spatially/temporally adjacent nodes in the next layer: Z_{m+1} = [T(W ...
arXiv:1612.07767v2 fatcat:shj6kot57vchzfkxbmebxztu5m
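
A rough sketch of the detection recipe described here, with assumed names (model_layers, clean, adversarial) and much simpler per-layer statistics than the paper actually uses.

    import torch
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def layer_stats(model_layers, x):
        # Summarize each convolutional block's output with simple statistics
        # (mean/std here; the paper uses richer ones) and concatenate them
        # into a feature vector for a lightweight detector. model_layers is
        # assumed to be an iterable of composable blocks.
        feats, h = [], x
        with torch.no_grad():
            for layer in model_layers:
                h = layer(h)
                feats += [h.mean().item(), h.std().item()]
        return np.array(feats)

    # Usage sketch: fit a simple detector on clean vs adversarial inputs.
    # X = np.stack([layer_stats(layers, x) for x in clean + adversarial])
    # y = np.array([0] * len(clean) + [1] * len(adversarial))
    # detector = LogisticRegression().fit(X, y)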