4,762 results

Deep Neural Rejection against Adversarial Examples [article]

Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
2020 arXiv   pre-print
In this work, we propose a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers  ...  Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed  ...  Attacking deep neural rejection: To properly evaluate the security, or adversarial robustness, of rejection-based defenses against adaptive white-box adversarial examples, we propose the following.  ...
arXiv:1910.00470v3 fatcat:dzpgmactbvhhlphvifqpnxsqbu
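The rejection idea described in the snippet above, scoring the feature representations produced at several network layers and abstaining when even the best class score is low, can be illustrated with a minimal scikit-learn sketch. The choice of monitored layers, the RBF-SVM scoring, the score-fusion step, and the zero threshold are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import SVC

class LayerwiseRejector:
    """Rejects inputs whose layer-wise feature representations look anomalous.

    layer_feats is a list of (n_samples, d_l) arrays, one per monitored layer;
    y holds integer class labels (3+ classes assumed, so decision_function
    returns one score per class).
    """

    def __init__(self, threshold=0.0):
        self.threshold = threshold
        self.layer_svms = []
        self.combiner = None

    def fit(self, layer_feats, y):
        scores = []
        for feats in layer_feats:
            svm = SVC(kernel="rbf", gamma="scale").fit(feats, y)
            self.layer_svms.append(svm)
            scores.append(svm.decision_function(feats))
        # A second classifier fuses the per-layer, per-class scores.
        self.combiner = SVC(kernel="rbf", gamma="scale").fit(np.hstack(scores), y)
        return self

    def predict(self, layer_feats):
        scores = np.hstack([svm.decision_function(f)
                            for svm, f in zip(self.layer_svms, layer_feats)])
        labels = self.combiner.predict(scores)
        fused = self.combiner.decision_function(scores)
        # Abstain (-1) when even the highest fused class score is below threshold.
        labels[fused.max(axis=1) < self.threshold] = -1
        return labels
```

Here `layer_feats` would hold activations extracted from a few chosen layers of the already-trained network for the same samples.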

Deep neural rejection against adversarial examples

Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli
2020 EURASIP Journal on Information Security  
In this work, we propose a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers  ...  Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed  ...  Attacking deep neural rejection: To properly evaluate the security, or adversarial robustness, of rejection-based defenses against adaptive white-box adversarial examples, we propose the following.  ...
doi:10.1186/s13635-020-00105-y fatcat:grtxarwetnesbkvd57kuncypou

A Survey on Adversarial Examples in Deep Learning

Kai Chen, Haoqi Zhu, Leiming Yan, Jinwei Wang
2020 Journal on Big Data  
Adversarial examples are a hot topic in the field of deep learning security.  ...  This article explains the key technologies and theories of adversarial examples: the concept of adversarial examples, their occurrence, the attacking methods of adversarial  ...  Papernot proposed a distillation defense mechanism against adversarial examples for deep neural networks, and verified the effectiveness of this defense mechanism on two types of deep neural networks  ...
doi:10.32604/jbd.2020.012294 fatcat:2o5bsbqyurfj7igqyqbs47fjre
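The distillation defense mentioned in the snippet trains a second ("distilled") network on the softened outputs of a first network. Below is a minimal PyTorch sketch of one such training step, offered as a hedged illustration; the temperature value and the `student`/`teacher` model names are placeholders, not the survey's or Papernot's exact settings.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, T=20.0):
    """One student training objective: cross-entropy against the teacher's softened outputs."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)  # softened teacher labels
    log_probs = F.log_softmax(student(x) / T, dim=1)
    # Gradients flow only into the student; the teacher is frozen above.
    return -(soft_targets * log_probs).sum(dim=1).mean()
```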

Hack The Box: Fooling Deep Learning Abstraction-Based Monitors [article]

Sara Hajj Ibrahim, Mohamed Nassar
2021 arXiv   pre-print
In this paper, we consider the case study of abstraction-based novelty detection and show that it is not robust against adversarial samples.  ...  Moreover, we show the feasibility of crafting adversarial samples that fool the deep learning classifier and bypass the novelty detection monitoring at the same time.  ...  Adversarial attacks against neural networks: Another idea is to use known adversarial attacks against neural networks as a starting point for attacking the monitor.  ...
arXiv:2107.04764v3 fatcat:sgojuv5bhja6zfjmarl5hprdf4
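A common form of the abstraction-based monitor this paper attacks keeps, for each class, an axis-aligned box over hidden-layer activations of correctly classified training data, and flags test inputs that fall outside the box of their predicted class. The sketch below is a hedged illustration of that idea; the monitored layer and the absence of any box enlargement are simplifying assumptions.

```python
import numpy as np

class BoxMonitor:
    """Per-class axis-aligned boxes over a hidden-layer embedding."""

    def fit(self, feats, preds):
        # feats: (n, d) hidden activations of correctly classified training data,
        # preds: (n,) their predicted classes.
        self.boxes = {c: (feats[preds == c].min(axis=0), feats[preds == c].max(axis=0))
                      for c in np.unique(preds)}
        return self

    def is_novel(self, feat, pred):
        lo, hi = self.boxes[pred]
        # Flag as novel if any coordinate escapes the box of the predicted class.
        return bool(np.any(feat < lo) or np.any(feat > hi))
```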

Detecting Black-box Adversarial Examples through Nonlinear Dimensionality Reduction

Francesco Crecchi, Davide Bacciu, Battista Biggio
2019 The European Symposium on Artificial Neural Networks  
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-perturbed inputs aimed to mislead classification.  ...  Our empirical findings show that the proposed approach is able to effectively detect adversarial examples crafted by non-adaptive attackers, i.e., not specifically tuned to bypass the detection method.  ...  Introduction: Deep neural networks (DNNs) reach state-of-the-art performance in a wide variety of pattern recognition tasks.  ...
dblp:conf/esann/CrecchiBB19 fatcat:vvzw2avibrcnvh2pemcsrzhqhe
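The detection recipe described above, embedding a hidden representation with a nonlinear dimensionality-reduction method and training a detector in the low-dimensional space, can be sketched as follows. Isomap and a one-class SVM stand in for the paper's exact embedding and detector; the chosen layer, the target dimensionality, and the `nu` value are illustrative assumptions.

```python
from sklearn.manifold import Isomap
from sklearn.svm import OneClassSVM

def fit_dr_detector(train_feats, n_components=2):
    # Learn a nonlinear low-dimensional embedding of clean hidden-layer features,
    # then model the clean data in that space with a one-class SVM.
    dr = Isomap(n_components=n_components).fit(train_feats)
    det = OneClassSVM(kernel="rbf", nu=0.05).fit(dr.transform(train_feats))
    return dr, det

def is_adversarial(dr, det, feats):
    # OneClassSVM labels outliers as -1 and inliers as +1.
    return det.predict(dr.transform(feats)) == -1
```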

Integration of statistical detector and Gaussian noise injection detector for adversarial example detection in deep neural networks

Weiqi Fan, Guangling Sun, Yuying Su, Zhi Liu, Xiaofeng Lu
2019 Multimedia tools and applications  
Existing deep learning-based adversarial detection methods require numerous adversarial images for their training.  ...  In particular, the Gaussian process regression-based detector shows better detection performance than the baseline models for most attacks in the case with fewer adversarial examples.  ...  From the perspective of a system with a deep learning model, it is possible to further secure the system by rejecting the adversarial example input, after determining through adversarial detection whether  ... 
doi:10.1007/s11042-019-7353-6 fatcat:3jlzbd37pzcvneuyis7ai3vkki
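One half of the combination described above, the Gaussian noise injection detector, can be approximated by measuring how much a model's softmax output moves when the input is perturbed with Gaussian noise; adversarial inputs tend to be less stable under such perturbations. The scoring rule, noise level, and number of draws below are illustrative assumptions, not the authors' exact detector.

```python
import torch
import torch.nn.functional as F

def noise_instability_score(model, x, sigma=0.05, n_draws=16):
    """Average L1 shift of the softmax output under Gaussian input noise."""
    model.eval()
    with torch.no_grad():
        clean = F.softmax(model(x), dim=1)
        shifts = []
        for _ in range(n_draws):
            noisy = F.softmax(model(x + sigma * torch.randn_like(x)), dim=1)
            shifts.append((noisy - clean).abs().sum(dim=1))
        # Higher scores indicate predictions that are unstable under noise.
        return torch.stack(shifts).mean(dim=0)
```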

Are Generative Classifiers More Robust to Adversarial Attacks? [article]

Yingzhen Li, John Bradshaw, Yash Sharma
2019 arXiv   pre-print
We further develop detection methods for adversarial examples, which reject inputs with low likelihood under the generative model.  ...  There is a rising interest in studying the robustness of deep neural network classifiers against adversaries, with both advanced attack and defence techniques being actively developed.  ...  Introduction: Deep neural networks have been shown to be vulnerable to adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2014).  ...
arXiv:1802.06552v3 fatcat:tyjy55zu7bfkvexaj5vh6u2diq
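The rejection rule quoted above, discarding inputs with low likelihood under a generative model, can be sketched with a simple density model in place of the paper's deep generative classifiers. The Gaussian mixture, the diagonal covariance, and the quantile-based threshold are stand-ins chosen for brevity.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_likelihood_rejector(train_feats, val_feats, n_components=10, fpr=0.05):
    gm = GaussianMixture(n_components=n_components, covariance_type="diag").fit(train_feats)
    # Threshold set so that roughly `fpr` of clean validation data would be rejected.
    threshold = np.quantile(gm.score_samples(val_feats), fpr)
    return gm, threshold

def rejects(gm, threshold, feats):
    # Reject inputs whose log-likelihood under the density model is too low.
    return gm.score_samples(feats) < threshold
```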

FADER: Fast Adversarial Example Rejection [article]

Francesco Crecchi, Marco Melis, Angelo Sotgiu, Davide Bacciu, Battista Biggio
2020 arXiv   pre-print
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-crafted inputs that mislead classification at test time.  ...  FADER overcomes the issues above by employing RBF networks as detectors: by fixing the number of required prototypes, the runtime complexity of adversarial example detectors can be controlled.  ...  Deep neural rejection: Sotgiu et al.  ...
arXiv:2010.09119v1 fatcat:ffotkbvncngkdf6pp4gixhraii
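FADER's key point in the snippet, that fixing the number of RBF prototypes bounds the detector's runtime, can be illustrated with a small prototype-based detector: k-means picks the prototypes, RBF activations to those prototypes form the feature vector, and a linear model flags adversarial inputs. The prototype count, `gamma`, and the logistic-regression head are assumptions, not the paper's architecture.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_activations(feats, protos, gamma=0.1):
    # RBF similarity of every sample to every prototype: cost scales with the prototype budget.
    dists = ((feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * dists)

def fit_rbf_detector(feats, is_adv, n_prototypes=50, gamma=0.1):
    # Prototypes are k-means centroids; a linear head flags adversarial inputs.
    protos = KMeans(n_clusters=n_prototypes, n_init=10).fit(feats).cluster_centers_
    clf = LogisticRegression(max_iter=1000).fit(rbf_activations(feats, protos, gamma), is_adv)
    return protos, clf

def detect(protos, clf, feats, gamma=0.1):
    return clf.predict(rbf_activations(feats, protos, gamma))  # 1 = flagged as adversarial
```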

ATRO: Adversarial Training with a Rejection Option [article]

Masahiro Kato, Zhenghang Cui, Yoshihiro Fukuhara
2020 arXiv   pre-print
To this end, various methods have been proposed to obtain a classifier that is robust against adversarial examples.  ...  In this paper, in order to acquire a more reliable classifier against adversarial attacks, we propose the method of Adversarial Training with a Rejection Option (ATRO).  ...  Benchmark test using neural networks: We compare ATRO using a deep linear SVM against a plain deep linear SVM with a rejection option but without adversarial training.  ...
arXiv:2010.12905v1 fatcat:6jydra4ssjdr3jo6wqb6knjrea
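A hedged sketch of combining adversarial training with a learned rejection option, in the spirit of ATRO: each batch is perturbed with FGSM, and the loss trades off classification error against a fixed rejection cost via a rejection head. The sigmoid gate and cross-entropy term are a simplified surrogate, not the paper's hinge-based objective, and the two-output `model` interface is an assumption.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    # Single-step attack used to generate the adversarial training examples.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x)[0], y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).detach()

def atro_style_loss(model, x, y, reject_cost=0.3, eps=0.03):
    # model(x) is assumed to return (class_logits, rejection_score).
    x_adv = fgsm(model, x, y, eps)
    logits, r = model(x_adv)
    cls_loss = F.cross_entropy(logits, y, reduction="none")
    accept = torch.sigmoid(r).squeeze(-1)
    # Pay the classification loss when accepting, a fixed cost when rejecting.
    return (accept * cls_loss + (1.0 - accept) * reject_cost).mean()
```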

Detecting Adversarial Examples through Nonlinear Dimensionality Reduction [article]

Francesco Crecchi, Davide Bacciu, Battista Biggio
2019 arXiv   pre-print
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-perturbed inputs aimed to mislead classification.  ...  Our empirical findings show that the proposed approach is able to effectively detect adversarial examples crafted by non-adaptive attackers, i.e., not specifically tuned to bypass the detection method.  ...  Introduction: Deep neural networks (DNNs) reach state-of-the-art performance in a wide variety of pattern recognition tasks.  ...
arXiv:1904.13094v2 fatcat:hd7bpc6fxzfjpan24f5nh2537q

Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid

Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains.  ...  It has, however, been shown that they can be fooled by adversarial examples, i.e., images altered by a barely perceivable adversarial noise, carefully crafted to mislead classification.  ...  Despite its impressive performance, recent work has shown how deep neural networks can be fooled by well-crafted adversarial examples affected by a barely perceivable adversarial noise.  ...
doi:10.1109/iccvw.2017.94 dblp:conf/iccvw/MelisDB0FR17 fatcat:nzcm4nqh5rep5cuilwcvjd7req

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid [article]

Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
2017 arXiv   pre-print
Deep neural networks have been widely adopted in recent years, exhibiting impressive performance in several application domains.  ...  In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally efficient countermeasure  ...  Security evaluation against adversarial examples: We now investigate the security of iCub in the presence of adversarial examples.  ...
arXiv:1708.06939v1 fatcat:xhzbo7mfjffsbhe7m7z4bk6dsq

Nearest neighbor pattern classification

T. Cover, P. Hart
1967 IEEE Transactions on Information Theory  
doi:10.1109/tit.1967.1053964 fatcat:wwzmy2yd3nb4znbhe7mno7w2bi

Towards Dependable Deep Convolutional Neural Networks (CNNs) with Out-distribution Learning [article]

Mahdieh Abbasi, Arezoo Rajabi, Christian Gagné, Rakesh B. Bobba
2018 arXiv   pre-print
Detection and rejection of adversarial examples in security-sensitive and safety-critical systems using deep CNNs is essential.  ...  In this paper, we propose an approach to augment CNNs with out-distribution learning in order to reduce the misclassification rate by rejecting adversarial examples.  ...  Proposed method: It has been argued that a central element explaining the success of deep neural networks is their capacity to learn distributed representations [2].  ...
arXiv:1804.08794v2 fatcat:fytfdlk26jd6paq453of7stmgy
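The augmentation described above, teaching a CNN to route suspicious inputs to an extra class, can be sketched as training with a "dustbin" label for out-of-distribution samples and rejecting anything predicted as that label at test time. The extra-class convention, the mixing of in- and out-distribution batches, and the sentinel value -1 are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def augmented_training_step(model, optimizer, x_in, y_in, x_out, num_classes):
    # Out-distribution samples all receive the extra "dustbin" label num_classes;
    # the model is assumed to have num_classes + 1 output units.
    y_out = torch.full((x_out.size(0),), num_classes, dtype=torch.long, device=x_out.device)
    x = torch.cat([x_in, x_out])
    y = torch.cat([y_in, y_out])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

def predict_with_reject(model, x, num_classes):
    pred = model(x).argmax(dim=1)
    # Inputs routed to the dustbin class are rejected (reported as -1).
    return torch.where(pred == num_classes, torch.full_like(pred, -1), pred)
```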

Robust Deep Neural Networks Inspired by Fuzzy Logic [article]

Minh Le
2020 arXiv   pre-print
Through experiments on MNIST and CIFAR-10, the new models are shown to be more local, better at rejecting noise samples, and more robust against adversarial examples.  ...  Deep neural networks have achieved impressive performance and become the de facto standard in many tasks.  ...  Preliminary results show that the proposed models are better behaved on noise patterns and more robust against adversarial examples.  ...
arXiv:1911.08635v3 fatcat:qjkpmpdfbnaqzbwmz3s76pxu7i
Showing results 1 — 15 out of 4,762 results