
On Sensitivity of Deep Learning Based Text Classification Algorithms to Practical Input Perturbations [article]

Aamir Miyajiwala, Arnav Ladkat, Samiksha Jagadale, Raviraj Joshi
2022 arXiv   pre-print
In this work, we carry out a data-focused study evaluating the impact of systematic practical perturbations on the performance of deep learning based text classification models such as CNN, LSTM, and  ...  Moreover, LSTM is slightly more sensitive to input perturbations than the CNN based model.  ...  We would like to express our gratitude towards our mentors at L3Cube for their continuous support and encouragement.  ...
arXiv:2201.00318v2 fatcat:gzthihz33fg5tnoyuaaizesmju

A Survey on Resilient Machine Learning [article]

Atul Kumar, Sameep Mehta
2017 arXiv   pre-print
Machine learning based systems are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicles, making investment decisions, and detecting and blocking network intrusions  ...  All model classes of machine learning systems can be misled by carefully crafted inputs that make them classify inputs wrongly.  ...  Evasion Attacks on Text Based Systems: Perturbation techniques for image or audio based systems cannot directly work on text based systems.  ...
arXiv:1707.03184v1 fatcat:qjylw7bvkzbdlbrof5cfpy2jyq

A Survey on Recent Advances in Privacy Preserving Deep Learning

Siran Yin, Leiming Yan, Yuanmin Shi, Yaoyang Hou, Yunhong Zhang
2020 Journal of Information Hiding and Privacy Protection  
Deep learning based on neural networks has made new progress in a wide variety of domains; however, it lacks protection for sensitive information.  ...  Privacy preserving deep learning aims to solve these problems.  ...  The model has the generality of deep learning, because it adds perturbations based on differential privacy to features, affine transformation layers, and loss functions, and the practicality of the model  ...
doi:10.32604/jihpp.2020.010780 fatcat:4443ngibn5dbbkodwlma6u6t2a
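
The differential-privacy ingredient mentioned in this entry (adding calibrated perturbations) is commonly realized with the Gaussian mechanism. Below is a minimal, hypothetical sketch of that mechanism applied to a clipped gradient; the constants follow the standard (epsilon, delta) calibration, and nothing here reproduces the surveyed paper's actual architecture.

```python
# Hypothetical sketch of the Gaussian mechanism: clip a gradient to bound
# its L2 sensitivity, then add noise calibrated for (epsilon, delta)-DP.
import numpy as np

def gaussian_mechanism(grad, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))  # bound sensitivity
    # Standard calibration: sigma = sqrt(2 ln(1.25/delta)) * sensitivity / epsilon
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=grad.shape)

print(gaussian_mechanism(np.ones(4), epsilon=0.5))
```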

Universal Rules for Fooling Deep Neural Networks based Text Classification [article]

Di Li, Danilo Vasconcellos Vargas, Sakurai Kouichi
2019 arXiv   pre-print
Recently, deep learning based natural language processing techniques have been used extensively to deal with spam mail, censorship evaluation in social networks, and other tasks.  ...  Here, we go beyond attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and could therefore turn any text sample into an adversarial one.  ...  Unfortunately, no studies have paid attention to methods or algorithms for generating universal perturbations against DNN-based text classification.  ...
arXiv:1901.07132v2 fatcat:nsdxdivblvcftasb5rd2lsx65a
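
For readers unfamiliar with the term, a universal (sample-agnostic) text perturbation is a single fixed rule applied unchanged to every input, in contrast to per-sample attacks. The sketch below illustrates only the application step with a hypothetical trigger phrase; discovering a rule that actually fools a classifier is the search problem the paper addresses.

```python
# Illustrative only: applying a sample-agnostic rule (a fixed prepended
# trigger) to arbitrary inputs. The trigger tokens are hypothetical
# placeholders, not ones reported by the paper.
TRIGGER = "zoning tapping fiennes"

def apply_universal_rule(text, trigger=TRIGGER):
    # The same rule is applied to every sample, unlike per-sample attacks.
    return f"{trigger} {text}"

print(apply_universal_rule("a quiet, moving portrait of small-town life"))
```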

Explainable AI: A Review of Machine Learning Interpretability Methods

Pantelis Linardatos, Vasilis Papastefanopoulos, Sotiris Kotsiantis
2020 Entropy  
This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations  ...  This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains, where their value could be immense, such as healthcare.  ...  [163], both of which proposed generating adversarial examples through text perturbations that are based on the BERT masked language model, as part of the original text is masked and alternative text  ...
doi:10.3390/e23010018 pmid:33375658 pmcid:PMC7824368 fatcat:gv42gcovm5cxzl2kmdsluiegdi
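
The BERT-masked-language-model perturbation strategy this snippet refers to (as in BAE and BERT-Attack) can be illustrated with a short sketch: mask one word and let the model propose contextual replacements, which the full attacks then filter and test against the victim classifier. This assumes the Hugging Face transformers library; the word-importance ranking and semantic filtering of the real attacks are omitted.

```python
# Minimal sketch of masked-LM word substitution, the core step behind
# BERT-based attacks such as BAE / BERT-Attack. Victim-model querying
# and candidate filtering are omitted.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def candidate_substitutions(words, position, top_k=5):
    """Mask one word and let BERT propose contextual replacements."""
    masked = words.copy()
    masked[position] = fill_mask.tokenizer.mask_token  # "[MASK]"
    predictions = fill_mask(" ".join(masked), top_k=top_k)
    # Each prediction carries the proposed token and its probability.
    return [(p["token_str"].strip(), p["score"]) for p in predictions]

words = "the film was a complete triumph".split()
for token, score in candidate_substitutions(words, position=5):
    print(f"{token:>12s}  {score:.3f}")
```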

Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey [article]

Wei Emma Zhang, Quan Z. Sheng, Ahoud Alhazmi, Chenliang Li
2019 arXiv   pre-print
However, existing perturbation methods for images cannot be directly applied to texts, as text data is discrete.  ...  With the development of high computational devices, deep neural networks (DNNs) have, in recent years, gained significant popularity in many Artificial Intelligence (AI) applications.  ...  The paper covers works from pioneering non-deep learning algorithms to recent deep learning algorithms.  ...
arXiv:1901.06796v3 fatcat:gfh4gzkvn5djpdkn7k63xlqahm

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers [article]

Ji Gao, Jack Lanchantin, Mary Lou Soffa, Yanjun Qi
2018 arXiv   pre-print
In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that forces a deep-learning classifier to misclassify a text input.  ...  Our experimental results indicate that DeepWordBug reduces the prediction accuracy of current state-of-the-art deep-learning models, including a decrease of 68% on average for a Word-LSTM model and 48  ...  It naturally raises concerns about the robustness of deep learning systems, considering that they have become core components of many security-sensitive applications such as text-based spam detection.  ...
arXiv:1801.04354v5 fatcat:y3mdfslcjrd4re34jo7r5vgfxe
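
DeepWordBug's perturbations are character-level edits (swap, substitution, deletion, insertion) applied to the words a black-box scoring function marks as most influential. The sketch below shows only the edit operations, with random word selection standing in for the model-query-based scoring.

```python
# Minimal sketch of character-level text perturbation in the spirit of
# DeepWordBug; random choices replace the black-box scoring of the real
# attack, purely to illustrate the edit operations.
import random

def perturb_word(word, rng):
    """Apply one of DeepWordBug's four character edits to a word."""
    if len(word) < 2:
        return word
    op = rng.choice(["swap", "substitute", "delete", "insert"])
    if op == "swap":
        i = rng.randrange(len(word) - 1)
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    i = rng.randrange(len(word))
    if op == "substitute":
        return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i + 1:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    return word[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]  # insert

def perturb_text(text, n_edits=2, seed=0):
    rng = random.Random(seed)
    words = text.split()
    for _ in range(n_edits):
        j = rng.randrange(len(words))  # stand-in for model-based word scoring
        words[j] = perturb_word(words[j], rng)
    return " ".join(words)

print(perturb_text("this movie was a wonderful surprise"))
```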

Review of Artificial Intelligence Adversarial Attack and Defense Technologies

Shilin Qiu, Qihe Liu, Shijie Zhou, Chunjiang Wu
2019 Applied Sciences  
This paper aims to comprehensively summarize the latest research progress on adversarial attack and defense technologies in deep learning.  ...  However, artificial intelligence systems are vulnerable to adversarial attacks, which limit the applications of artificial intelligence (AI) technologies in key security fields.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ...
doi:10.3390/app9050909 fatcat:u4if4uweqzc6tfdrc3kokckkua

Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid [article]

Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
2017 arXiv   pre-print
to mitigate this threat, based on rejecting classification of anomalous inputs.  ...  assumption of learning algorithms.  ...  In this work, we are the first to show that robot-vision systems based on deep learning algorithms are also vulnerable to this potential threat.  ... 
arXiv:1708.06939v1 fatcat:xhzbo7mfjffsbhe7m7z4bk6dsq

Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid

Marco Melis, Ambra Demontis, Battista Biggio, Gavin Brown, Giorgio Fumera, Fabio Roli
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
to mitigate this threat, based on rejecting classification of anomalous inputs.  ...  assumption of learning algorithms.  ...  In this work, we are the first to show that robot-vision systems based on deep learning algorithms are also vulnerable to this potential threat.  ... 
doi:10.1109/iccvw.2017.94 dblp:conf/iccvw/MelisDB0FR17 fatcat:nzcm4nqh5rep5cuilwcvjd7req
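
The defense both versions of this paper describe, rejecting classification of anomalous inputs, can be illustrated with a simple confidence-threshold rule; this is an illustrative stand-in, not the authors' actual detector, and the threshold below is arbitrary.

```python
# Illustrative rejection rule: refuse to classify when the maximum
# softmax confidence falls below a threshold.
import numpy as np

def classify_with_reject(logits, threshold=0.9):
    shifted = np.exp(logits - logits.max())   # numerically stable softmax
    probs = shifted / shifted.sum()
    if probs.max() < threshold:
        return None                           # reject as anomalous
    return int(probs.argmax())

print(classify_with_reject(np.array([4.0, 0.1, -1.0])))   # confident -> 0
print(classify_with_reject(np.array([0.4, 0.3, 0.35])))   # ambiguous -> None
```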

A survey in Adversarial Defences and Robustness in NLP [article]

Shreya Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, Balaraman Ravindran
2022 arXiv   pre-print
In recent years, deep neural networks have been shown to lack robustness and are likely to break under adversarial perturbations of the input data.  ...  However, numerous methods for adversarial defense have been proposed of late for different NLP tasks such as text classification, named entity recognition, and natural language inference.  ...  INTRODUCTION: In recent times, deep learning algorithms in natural language processing (NLP) have taken the area to a new level.  ...
arXiv:2203.06414v2 fatcat:2ukd44px35e7ppskzkaprfw4ha

Quantifying Explainability in NLP and Analyzing Algorithms for Performance-Explainability Tradeoff [article]

Mitchell Naylor, Christi French, Samantha Terker, Uday Kamath
2021 arXiv   pre-print
The healthcare domain is one of the most exciting application areas for machine learning, but a lack of model transparency contributes to a lag in adoption within the industry.  ...  In this work, we explore the current art of explainability and interpretability within a case study in clinical text classification, using a task of mortality prediction within MIMIC-III clinical notes  ...  Acknowledgements The authors would like to thank the organizers of the Interpretable ML in Healthcare workshop and ICML, as well as the anonymous reviewers for their feedback and advice.  ... 
arXiv:2107.05693v1 fatcat:jcwm7z5fufevhje4n3ep4quooa

Black-Box Adversarial Entry in Finance through Credit Card Fraud Detection

Akshay Agarwal, Nalini K. Ratha
2021 International Conference on Information and Knowledge Management  
However, very limited attention has been given to other kinds of inputs such as speech, text, and tabular data.  ...  Apart from that, by perturbing individual features, it is shown which column features are more or less sensitive with respect to incorrect classification by the classifier.  ...  The sensitivity of machine learning algorithms to minute perturbations in other domains [13] requires that ML algorithms used for tabular databases be secured to ensure correct decisions.  ...
dblp:conf/cikm/0001R21 fatcat:cv647wb5ijgjldx4tx4phvfj4a
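
The per-feature sensitivity probe this entry describes can be sketched as follows: perturb one tabular column at a time and measure how often the classifier's decision flips. The predict function and data here are toy stand-ins, not the paper's fraud-detection setup.

```python
# Illustrative per-column sensitivity probe for tabular classifiers:
# add noise to one feature at a time and count prediction flips.
import numpy as np

def column_sensitivity(predict, X, noise_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    flips = {}
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += rng.normal(0.0, noise_scale * X[:, j].std(), size=len(X))
        flips[j] = float(np.mean(predict(X_pert) != baseline))
    return flips  # fraction of flipped predictions per column

# Toy usage: a linear decision rule on random data.
w = np.array([2.0, -0.5, 0.1])
predict = lambda X: (X @ w > 0).astype(int)
X = np.random.default_rng(1).normal(size=(500, 3))
print(column_sensitivity(predict, X))
```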

Towards Security Threats of Deep Learning Systems: A Survey [article]

Yingzhe He and Guozhu Meng and Kai Chen and Xingbo Hu and Jinwen He
2020 arXiv   pre-print
In order to unveil the security weaknesses and aid in the development of a robust deep learning system, we undertake an investigation of attacks on deep learning, and analyze these attacks to conclude  ...  In particular, we focus on four types of attacks associated with security threats of deep learning: model extraction attack, model inversion attack, poisoning attack, and adversarial attack.  ...  Let F be the deep learning model, which computes the corresponding outcome y for a given input x, i.e., y = F(x); y_t denotes the true label of input x.  ...
arXiv:1911.12562v2 fatcat:m3lyece44jgdbp6rlcpj6dz2gm
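
The notation quoted above (y = F(x) with true label y_t) is the standard setup for adversarial attacks, which seek a small perturbation delta such that F(x + delta) differs from y_t. A minimal sketch using FGSM, a classic one-step attack chosen here for illustration rather than taken from this survey, with a toy PyTorch model:

```python
# Minimal FGSM sketch: delta = epsilon * sign(grad_x loss(F(x), y_t)).
# `model` stands in for any differentiable classifier F.
import torch
import torch.nn.functional as F_loss

def fgsm(model, x, y_t, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F_loss.cross_entropy(model(x), y_t)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # step along the loss gradient
    return x_adv.detach()

# Toy usage with a linear "model" on random data.
model = torch.nn.Linear(10, 3)
x = torch.randn(1, 10)
y_t = torch.tensor([2])                   # true label of x
x_adv = fgsm(model, x, y_t)
print((x_adv - x.detach()).abs().max())   # perturbation bounded by epsilon
```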

Machine Learning Security: Threats, Countermeasures, and Evaluations

Mingfu Xue, Chengxiang Yuan, Heyi Wu, Yushu Zhang, Weiqiang Liu
2020 IEEE Access  
Moreover, such attacks are stealthy due to the unexplainable nature of deep learning models.  ...  Instead of focusing on one stage or one type of attack, this paper covers all aspects of machine learning security from the training phase to the test phase.  ...  The initial works on adversarial examples aimed at analyzing the sensitivity of deep learning algorithms to minimal perturbations.  ...
doi:10.1109/access.2020.2987435 fatcat:ksinvcvcdvavxkzyn7fmsa27ji
Showing results 1–15 of 6,616.