
Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors [article]

Gerda Bortsova, Cristina González-Gonzalo, Suzanne C. Wetstein, Florian Dubost, Ioannis Katramados, Laurens Hogeweg, Bart Liefers, Bram van Ginneken, Josien P.W. Pluim, Mitko Veta, Clara I. Sánchez, Marleen de Bruijne
2021 arXiv   pre-print
In this paper, we study previously unexplored factors affecting adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology.  ...  Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure.  ... 
arXiv:2006.06356v3 fatcat:aoj7sator5hc3lkl6jttrdj5y4

Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors

Gerda Bortsova, Cristina González-Gonzalo, Suzanne C. Wetstein, Florian Dubost, Ioannis Katramados, Laurens Hogeweg, Bart Liefers, Bram van Ginneken, Josien P.W. Pluim, Mitko Veta, Clara I. Sánchez, Marleen de Bruijne
2021 Medical Image Analysis  
In this paper, we study previously unexplored factors affecting adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology.  ...  Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure.  ... 
doi:10.1016/j.media.2021.102141 pmid:34246850 fatcat:lwkiukekkjgbfp7hi2eqkwuvva
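
Both records above describe the same study of adversarial attack vulnerability in medical imaging. For readers unfamiliar with the attack family involved, below is a minimal sketch of the fast gradient sign method (FGSM), a standard attack that such vulnerability studies commonly evaluate; the ResNet-18 stand-in and random tensor are illustrative placeholders, not the paper's actual models or data.

    # Minimal FGSM sketch in PyTorch. The classifier and input are stand-ins,
    # not the MedIA pipelines evaluated in the paper above.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    model = models.resnet18(weights=None).eval()  # illustrative classifier

    def fgsm_attack(model, image, label, epsilon=8 / 255):
        """Shift `image` by epsilon along the sign of the loss gradient."""
        image = image.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(image), label)
        loss.backward()
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0, 1).detach()

    x = torch.rand(1, 3, 224, 224)  # dummy image scaled to [0, 1]
    y = torch.tensor([0])           # dummy label
    x_adv = fgsm_attack(model, x, y)
    print(float((x_adv - x).abs().max()))  # perturbation is bounded by epsilon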

Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems [article]

Gaurav Kumar Nayak, Ruchit Rawal, Rohit Lal, Himanshu Patil, Anirban Chakraborty
2022 arXiv   pre-print
It has been observed that adversarial attacks often corrupt the high-frequency components of the input image.  ...  An adversarial attack perturbs an image with imperceptible noise, leading to incorrect model prediction.  ...  Vulnerability towards adversarial examples (adversarial vulnerability) in state-of-the-art DNNs is particularly worrisome in safety-critical applications such as medical imaging [1, 26], facial recognition  ... 
arXiv:2205.02604v1 fatcat:b2cgqowdgjadde3reddmjcjkh4
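
The snippet's observation that attacks often corrupt high-frequency components can be made concrete with a spectral measurement. Below is a hedged sketch that computes the fraction of an image's FFT energy above a cutoff radius; this is one simple proxy, not necessarily the metric the authors use.

    # Fraction of spectral energy above a normalized frequency radius --
    # a simple proxy for the high-frequency corruption mentioned above,
    # not necessarily the measurement used in the paper.
    import numpy as np

    def high_freq_energy_ratio(img, cutoff=0.25):
        spectrum = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
        energy = np.abs(spectrum) ** 2
        return float(energy[radius > cutoff].sum() / energy.sum())

    clean = np.random.rand(64, 64)
    perturbed = clean + 0.05 * np.random.randn(64, 64)  # stand-in perturbation
    print(high_freq_energy_ratio(clean), high_freq_energy_ratio(perturbed))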

Kryptonite: An Adversarial Attack Using Regional Focus [chapter]

Yogesh Kulkarni, Krisha Bhambani
2021 Lecture Notes in Computer Science  
In this paper, we present a novel study analyzing the weaknesses in the security of deep learning systems. We propose 'Kryptonite', an adversarial attack on images.  ...  We explicitly extract the Region of Interest (RoI) for the images and use it to add imperceptible adversarial perturbations to images to fool the DNN.  ...  The inspiration for this attack is the possible unexplored vulnerability to perturbations in a region of interest, since most adversarial attacks are fairly agnostic to it.  ... 
doi:10.1007/978-3-030-81645-2_26 fatcat:5yp533stwbghhhvw7l2grub6iy
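
Kryptonite's key idea, per the snippet, is to concentrate the perturbation in a region of interest. A minimal way to express that constraint is to mask the gradient step of an attack such as FGSM, as sketched below; the binary `roi_mask` is a hypothetical input, and the paper's actual RoI extraction and optimization are more sophisticated.

    # Confine an FGSM-style perturbation to a region of interest by masking
    # the gradient step. `roi_mask` (1 inside the RoI, 0 outside) is a
    # hypothetical input; Kryptonite's real attack is more involved.
    import torch
    import torch.nn as nn

    def roi_fgsm(model, image, label, roi_mask, epsilon=8 / 255):
        image = image.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(image), label)
        loss.backward()
        delta = epsilon * image.grad.sign() * roi_mask  # zero outside the RoI
        return (image + delta).clamp(0, 1).detach()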

Towards Robust General Medical Image Segmentation [article]

Laura Daza, Juan C. Pérez, Pablo Arbeláez
2021 arXiv   pre-print
We propose a new framework to assess the robustness of general medical image segmentation systems.  ...  Several attacks and defenses have been proposed to improve the performance of Deep Neural Networks in the presence of adversarial noise in the natural image domain.  ...  Acknowledgements: We thank Amazon Web Services (AWS) for a computational research grant used for the development of this project.  ... 
arXiv:2107.04263v1 fatcat:qfthhn5llfa4zichqjnqi7ogei
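
Assessing the robustness of a segmentation system, as the snippet describes, ultimately reduces to comparing segmentation quality on clean versus perturbed inputs. The sketch below measures that gap with the Dice coefficient; `seg_model` and the thresholding scheme are illustrative assumptions, not the paper's benchmark.

    # Dice overlap on clean vs. adversarial inputs -- the core measurement
    # behind segmentation-robustness assessment. `seg_model` is a stand-in
    # producing per-pixel logits, not the framework from the paper above.
    import torch

    def dice_score(pred, target, eps=1e-6):
        """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
        inter = (pred * target).sum()
        return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))

    def robustness_gap(seg_model, x_clean, x_adv, target_mask):
        pred_clean = (seg_model(x_clean).sigmoid() > 0.5).float()
        pred_adv = (seg_model(x_adv).sigmoid() > 0.5).float()
        return dice_score(pred_clean, target_mask) - dice_score(pred_adv, target_mask)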

Two Sides of the Same Coin: Boons and Banes of Machine Learning in Hardware Security

Wenye Liu, Chip-Hong Chang, Xueyang Wang, Chen Liu, Jason Fung, Mohammad Ebrahimabadi, Naghmeh Karimi, Xingyu Meng, Kanad Basu
2021 IEEE Journal on Emerging and Selected Topics in Circuits and Systems  
Accordingly, due to the opportunities of ML-assisted security and the vulnerabilities of ML implementation, in this paper, we will survey the applications, vulnerabilities and fortification of ML from  ...  On the other hand, ML-based approaches have also been adopted by adversaries to assist side-channel attacks, reverse engineer integrated circuits and break hardware security primitives like Physically  ...  Some of the attacked image samples are shown in Fig. 11.  ... 
doi:10.1109/jetcas.2021.3084400 fatcat:c4wdkghpo5fwbhvkekaysnahzm

Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions [article]

Izzat Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala Al-Fuqaha, Abdallah Khreishah, Abdulelah Algosaibi
2021 arXiv   pre-print
These vulnerabilities allow adversaries to launch a diversified set of adversarial attacks on these algorithms in different applications of social media text processing.  ...  However, these ML and NLP algorithms have been widely shown to be vulnerable to adversarial attacks.  ...  Acknowledgment: The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research work through the project number 1120.  ... 
arXiv:2110.13980v1 fatcat:e373if4sszed7i4owzwiabmzxu
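
As a concrete instance of the text attacks this survey covers, the sketch below applies random character swaps to words, in the spirit of character-level attacks such as DeepWordBug; real attacks choose edits adversarially against a target model rather than at random.

    # Character-swap perturbation of the kind surveyed above. Real attacks
    # select edits adversarially against a target model, not randomly.
    import random

    def perturb_text(text, rate=0.1, seed=0):
        rng = random.Random(seed)
        words = text.split()
        for i, w in enumerate(words):
            if len(w) > 3 and rng.random() < rate:
                j = rng.randrange(1, len(w) - 1)
                words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]  # swap chars j, j+1
        return " ".join(words)

    print(perturb_text("the model is vulnerable to adversarial attacks", rate=0.5))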

Man-At-The-End attacks: Analysis, taxonomy, human aspects, motivation and future directions

Adnan Akhunzada, Mehdi Sookhak, Nor Badrul Anuar, Abdullah Gani, Ejaz Ahmed, Muhammad Shiraz, Steven Furnell, Amir Hayat, Muhammad Khurram Khan
2015 Journal of Network and Computer Applications  
Moreover, the paper elaborates on the concept of MATE attacks, their different forms, and the analysis of MATE versus insider threats to present a thematic taxonomy of a MATE attack.  ...  The main objective of the paper is to mitigate the consequences of MATE attacks through the human element of security and highlight the need for this element to form a part of a holistic security strategy  ...  off-the-shelf systems include a plenitude of known and unpatched software vulnerabilities.  ... 
doi:10.1016/j.jnca.2014.10.009 fatcat:kp3dqvfqhbewhlzz67r3xidgx4

Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks [article]

Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
2022 arXiv   pre-print
...  reconstructed image to data that is controlled by an adversary.  ...  One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model to extract representations and thus disclose the training data.  ...  and to facilitate the development of privacy-preserving machine learning systems.  ... 
arXiv:2203.00481v1 fatcat:grdkb4qqjvbojf6ujqovniyysq
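
Model inversion, as described in the snippet, reconstructs inputs from a trained model. A common gradient-based baseline is sketched below: optimize a blank input to maximize the score of a target class; the stand-in classifier is illustrative, and the paper studies richer, prior-assisted variants of this idea.

    # Gradient-based model inversion baseline: optimize an input so the
    # classifier scores a target class highly. The ResNet-18 stand-in is
    # illustrative; the paper above studies prior-assisted variants.
    import torch
    import torchvision.models as models

    def invert_class(model, target_class, steps=200, lr=0.1):
        x = torch.zeros(1, 3, 224, 224, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            logits = model(x)
            # maximize the target logit; small L2 term keeps pixels in range
            loss = -logits[0, target_class] + 1e-4 * x.pow(2).sum()
            loss.backward()
            opt.step()
            x.data.clamp_(0, 1)
        return x.detach()

    model = models.resnet18(weights=None).eval()
    reconstruction = invert_class(model, target_class=0)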

A Survey of Honeypots and Honeynets for Internet of Things, Industrial Internet of Things, and Cyber-Physical Systems [article]

Javier Franco, Ahmet Aris, Berk Canberk, A. Selcuk Uluagac
2021 arXiv   pre-print
However, they have become popular targets of attacks, due to their inherent limitations which create vulnerabilities.  ...  It provides a taxonomy and extensive analysis of the existing honeypots and honeynets, states key design factors for the state-of-the-art honeypot/honeynet research and outlines open issues for future  ...  Although research honeypots are important to understand the attacks and new tactics of attackers, they do not actively participate in securing an IoT, IIoT, or CPS environment.  ... 
arXiv:2108.02287v1 fatcat:l4b23mylyfd6xjtfhrtsrin3zq

Quantum Adversarial Machine Learning [article]

Sirui Lu, Lu-Ming Duan, Dong-Ling Deng
2019 arXiv   pre-print
We find that, similar to traditional classifiers based on classical neural networks, quantum learning systems are likewise vulnerable to crafted adversarial examples, independent of whether the input data  ...  Our results uncover the notable vulnerability of quantum machine learning systems to adversarial perturbations, which not only reveals a novel perspective in bridging machine learning and quantum physics  ...  possible medical disaster.  ... 
arXiv:2001.00030v1 fatcat:p67vurfb3jbdveehdaugibqqpm

Taxonomy and Challenges of Out-of-Band Signal Injection Attacks and Defenses [article]

Ilias Giechaskiel, Kasper Bonne Rasmussen
2019 arXiv   pre-print
Out-of-band signal injection attacks thus pose previously-unexplored security risks by exploiting hardware imperfections in the sensors themselves, or in their interfaces to microcontrollers.  ...  Overall, the ever-increasing reliance on sensors embedded in everyday commodity devices necessitates that a stronger focus be placed on improving the security of such systems against out-of-band signal  ...  One of these commonalities is that the vulnerability of systems to out-of-band signal injection attacks depends on both: (a) how adversarial signals are received by the devices under attack; and (b) how  ... 
arXiv:1901.06935v3 fatcat:r774b46irvd6hc5dmtd42hksay

Taxonomy and Challenges of Out-of-Band Signal Injection Attacks and Defenses

Ilias Giechaskiel, Kasper B. Rasmussen
2019 IEEE Communications Surveys and Tutorials  
Out-of-band signal injection attacks thus pose previously-unexplored security risks by exploiting hardware imperfections in the sensors themselves, or in their interfaces to microcontrollers.  ...  Overall, the ever-increasing reliance on sensors embedded in everyday commodity devices necessitates that a stronger focus be placed on improving the security of such systems against out-of-band signal  ...  One of these commonalities is that the vulnerability of systems to out-of-band signal injection attacks depends on both: (a) how adversarial signals are received by the devices under attack; and (b) how  ... 
doi:10.1109/comst.2019.2952858 fatcat:injy5gxjuzhxzjs5mxqjkgp6zy

When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks [article]

Chao-Han Huck Yang, Yi-Chieh Liu, Pin-Yu Chen, Xiaoli Ma, Yi-Chang James Tsai
2019 arXiv   pre-print
We further provide the first look into the characteristics of discovered CE of adversarially perturbed images generated by gradient-based methods.  ...  Moreover, CE holds promises for detecting adversarial examples as it possesses distinct characteristics in the presence of adversarial perturbations.  ...  The importance of reasoning is also motivated by demystifying the vulnerable properties of DNNs against adversarial examples [8, 9, 10].  ... 
arXiv:1902.03380v3 fatcat:duh3oiyxazbmrjwmlkfoi2xll4

Hand-based multibiometric systems: state-of-the-art and future challenges

Anum Aftab, Farrukh Aslam Khan, Muhammad Khurram Khan, Haider Abbas, Waseem Iqbal, Farhan Riaz
2021 PeerJ Computer Science  
We cover the existing multibiometric systems in the context of various feature extraction schemes, along with an analysis of their performance using one of the performance measures used for biometric systems  ...  The traditional methods used for the identification of individuals such as personal identification numbers (PINs), identification tags, etc., are vulnerable as they are easily compromised by hackers  ...  The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.  ... 
doi:10.7717/peerj-cs.707 pmid:34712793 pmcid:PMC8507475 fatcat:aejhjsxfbnhtzeiyggynamkbim
Showing results 1 — 15 out of 832 results