
Challenges and Countermeasures for Adversarial Attacks on Deep Reinforcement Learning [article]

Inaam Ilahi, Muhammad Usama, Junaid Qadir, Muhammad Umar Janjua, Ala Al-Fuqaha, Dinh Thai Hoang, Dusit Niyato
2021 arXiv   pre-print
We first cover some fundamental background on DRL and present emerging adversarial attacks on machine learning techniques.  ...  Finally, we highlight open issues and research challenges for developing solutions to deal with attacks on DRL-based intelligent systems.  ...  C&W: Carlini and Wagner; CDG: Common Dominant adversarial example Generation; c-MARL: Cooperative Multi-Agent Reinforcement Learning; DRL: Deep Reinforcement Learning; DDPG: Deep Deterministic Policy  ... 
arXiv:2001.09684v2 fatcat:sxl2t2wd5jf2bmk3qxnlrtmrvu

Adversarial Examples: Attacks and Defenses for Deep Learning

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2019 IEEE Transactions on Neural Networks and Learning Systems  
Therefore, attacks and defenses on adversarial examples draw great attention.  ...  Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples.  ...  [93], [94] generated adversarial examples on deep reinforcement learning policies.  ... 
doi:10.1109/tnnls.2018.2886017 pmid:30640631 fatcat:enznysw3svfzdjrmubwkedr6me

Reinforcement Learning-based Design of Side-channel Countermeasures [article]

Jorai Rijsdijk, Lichao Wu, Guilherme Perin, Stjepan Picek
2021 IACR Cryptology ePrint Archive  
We consider several widely adopted hiding countermeasures and use the reinforcement learning paradigm to design specific countermeasures that show resilience against deep learning-based side-channel attacks  ...  Deep learning-based side-channel attacks are capable of breaking targets protected with countermeasures.  ...  If deep learning attacks are the most powerful ones, an intuitive direction should be to design countermeasures against such attacks.  ... 
dblp:journals/iacr/RijsdijkWPP21a fatcat:p3wpgffowfg35afoumeius7p7y

Adversarial Examples: Attacks and Defenses for Deep Learning [article]

Xiaoyong Yuan, Pan He, Qile Zhu, Xiaolin Li
2018 arXiv   pre-print
Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples and explore the challenges and the potential solutions.  ...  Therefore, attacks and defenses on adversarial examples draw great attention.  ...  ACKNOWLEDGMENT The work presented is supported in part by National Science Foundation (grants ACI 1245880, ACI 1229576, CCF-1128805, CNS-1624782), and Florida Center for Cybersecurity seed grant.  ... 
arXiv:1712.07107v3 fatcat:5wcz4h4eijdsdjeqwdpzbfbjeu

SoK: Deep Learning-based Physical Side-channel Analysis [article]

Stjepan Picek, Guilherme Perin, Luca Mariot, Lichao Wu, Lejla Batina
2021 IACR Cryptology ePrint Archive  
We first dissect deep learning-assisted attacks into different phases and map those phases to the efforts conducted so far in the domain.  ...  For each of the phases, we identify the weaknesses and challenges that triggered the known open problems.  ...  Challenge 16: Design countermeasures that achieve broad generalization to cover unknown attack scenarios. Challenge 17: There are many developments for deep learning-based SCA.  ... 
dblp:journals/iacr/PicekPMWB21 fatcat:myc4dyfqofdhpm4fmyrhdp4s6q

Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge [article]

Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
2020 arXiv   pre-print
However, machine learning is known to be vulnerable to adversarial evasion attacks that manipulate a small number of features to make classifiers wrongly recognize a malware sample as a benign one.  ...  Inspired by the AICS'2019 Challenge, we systematize a number of principles for enhancing the robustness of neural networks against adversarial malware evasion attacks.  ...  How the attacker wages the attack Researchers generate adversarial malware samples using various machine learning techniques such as genetic algorithms, reinforcement learning, generative networks, feed-forward  ... 
arXiv:1812.08108v3 fatcat:4trysg2ipnfj7bspyblvtan2eq

Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification [article]

Songxiang Liu, Haibin Wu, Hung-yi Lee, Helen Meng
2019 arXiv   pre-print
We implement high-performing countermeasure models in the ASVspoof 2019 challenge and conduct adversarial attacks on them.  ...  In this paper, we investigate the vulnerability of spoofing countermeasures for ASV under both white-box and black-box adversarial attacks with the fast gradient sign method (FGSM) and the projected gradient  ...  Recently, ASV systems based on deep learning models require fewer concepts and heuristics compared to traditional speaker verification systems and have achieved considerable performance improvement.  ... 
arXiv:1910.08716v1 fatcat:2cveg2mgrratrewyzdbsjpfg34
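The fast gradient sign method (FGSM) named in the entry above is simple enough to sketch. The toy below applies FGSM to a logistic-regression stand-in for a countermeasure model, using the hand-derived cross-entropy gradient with respect to the input; the weights, input, and epsilon are illustrative values chosen for the sketch, not taken from the paper:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, y, w, b, eps):
    """FGSM for a logistic-regression model: x_adv = x + eps * sign(dL/dx).

    For binary cross-entropy loss, the gradient of the loss w.r.t. input
    component x_i is (sigmoid(w.x + b) - y) * w_i.
    """
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = sigmoid(logit) - y
    return [xi + eps * sign(err * wi) for xi, wi in zip(x, w)]

# Toy model and a clean input confidently classified as class 1.
w, b = [1.0, -2.0, 0.5], 0.0
x = [2.0, -1.0, 1.0]                     # logit = 4.5 -> class 1

x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=1.5)
logit_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(logit_adv)                          # -0.75: the prediction flips to class 0
```

PGD, the other attack the entry mentions, iterates this step with a smaller step size and projects back into an epsilon-ball after each iteration; the one-shot version above is the simplest member of that family.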

RLXSS: Optimizing XSS Detection Model to Defend Against Adversarial Attacks Based on Reinforcement Learning

Yong Fang, Cheng Huang, Yijia Xu, Yang Li
2019 Future Internet  
First, the adversarial samples of the detection model are mined by the adversarial attack model based on reinforcement learning.  ...  With the development of artificial intelligence, machine learning algorithms and deep learning algorithms are widely applied to attack detection models.  ...  Acknowledgments: We thank anonymous reviewers and editors for provided helpful comments on earlier drafts of the manuscript. Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/fi11080177 fatcat:c5fcaqq3jjghfiblzvyyu63xqi

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review [article]

Yansong Gao, Bao Gia Doan, Zhi Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
2020 arXiv   pre-print
This work provides the community with a timely comprehensive review of backdoor attacks and countermeasures on deep learning.  ...  We have also reviewed the flip side of backdoor attacks, which are explored for i) protecting intellectual property of deep learning models, ii) acting as a honeypot to catch adversarial example attacks  ...  We provide a taxonomy of attacks on deep learning to clarify the differences between backdoor attacks and other adversarial attacks, including adversarial examples, universal adversarial patch, and conventional  ... 
arXiv:2007.10760v3 fatcat:6i4za6345jedbe5ek57l2tgld4

Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence [article]

Yi Han, David Hubczenko, Paul Montague, Olivier De Vel, Tamas Abraham, Benjamin I.P. Rubinstein, Christopher Leckie, Tansu Alpcan, Sarah Erfani
2020 arXiv   pre-print
Recent studies have demonstrated that reinforcement learning (RL) agents are susceptible to adversarial manipulation, similar to vulnerabilities previously demonstrated in the supervised learning setting  ...  While most existing work studies the problem in the context of computer vision or console games, this paper focuses on reinforcement learning in autonomous cyber defence under partial observability.  ...  Adversarial Reinforcement Learning It has been shown that reinforcement learning models are also vulnerable to the above attacks against classifiers. For example, Huang et al.  ... 
arXiv:1902.09062v3 fatcat:4qwowgr7a5hsripc2nzvrmps4m

Adversarial Attacks on Cognitive Self-Organizing Networks: The Challenge and the Way Forward [article]

Muhammad Usama, Junaid Qadir, Ala Al-Fuqaha
2018 arXiv   pre-print
In this paper, we explore the effect of adversarial attacks on CSON.  ...  Our experiments highlight the level of threat that CSON have to deal with in order to meet the challenges of next-generation networks and point out promising directions for future work.  ...  Similar ideas based on deep reinforcement learning for learning from environment and experience termed as experience-driven networking are presented in [37] . Feamster et al.  ... 
arXiv:1810.07242v1 fatcat:mqnblp63dfhwdndo3cgn3enf2u

Towards Resilient Artificial Intelligence: Survey and Research Issues [article]

Oliver Eigner, Sebastian Eresheim, Peter Kieseberg, Lukas Daniel Klausner, Martin Pirker, Torsten Priebe, Simon Tjoa, Fiammetta Marulli, Francesco Mercaldo
2021 arXiv   pre-print
Their resilience against attacks and other environmental influences needs to be ensured just like for other IT assets.  ...  Considering the particular nature of AI, and machine learning (ML) in particular, this paper provides an overview of the emerging field of resilient AI and presents research issues the authors identify  ...  Reinforcement Learning. Huang et al. [40] provided the first examples that adversarial attacks are also possible on reinforcement learning (RL) agents.  ... 
arXiv:2109.08904v1 fatcat:vadq2vohljhxpbcokklir4buee

Poisoning attacks and countermeasures in intelligent networks: status quo and prospects

Chen Wang, Jian Chen, Yang Yang, Xiaoqiang Ma, Jiangchuan Liu
2021 Digital Communications and Networks  
In this survey, we comprehensively review existing poisoning attacks as well as the countermeasures in intelligent networks for the first time.  ...  We also highlight some remaining challenges and future directions in the attack-defense confrontation to promote further research in this emerging yet promising area.  ...  According to existing studies, we next discuss poisoning attacks in traditional supervised learning, traditional unsupervised learning, deep learning and reinforcement learning.  ... 
doi:10.1016/j.dcan.2021.07.009 fatcat:36wblmvyyfgifht2ywyalmr52a

Reinforcement Learning for Autonomous Defence in Software-Defined Networking [article]

Yi Han, Benjamin I.P. Rubinstein, Tamas Abraham, Tansu Alpcan, Olivier De Vel, Sarah Erfani, David Hubczenko, Christopher Leckie, Paul Montague
2018 arXiv   pre-print
In addition, we also study the impact of the attack timing, and explore potential countermeasures such as adversarial training.  ...  In particular, we focus on how an RL agent reacts towards different forms of causative attacks that poison its training process, including indiscriminate and targeted, white-box and black-box attacks.  ...  6 overviews previous work on adversarial machine learning (including attacks against reinforcement learning) and existing countermeasures; Section 7 concludes the paper, and offers directions for future  ... 
arXiv:1808.05770v1 fatcat:wiocs64zi5aezazzi4ko7rh43y

Security and Privacy Considerations for Machine Learning Models Deployed in the Government and Public Sector (white paper) [article]

Nader Sehatbakhsh, Ellie Daw, Onur Savas, Amin Hassanzadeh, Ian McCulloh
2020 arXiv   pre-print
As machine learning becomes a more mainstream technology, the objective for governments and public sectors is to harness the power of machine learning to advance their mission by revolutionizing public  ...  We then briefly overview the possible attacks and defense scenarios, and finally, propose recommendations and guidelines that once considered can enhance the security and privacy of the provided services  ...  However, similar attacks can be adopted for unsupervised and/or reinforcement learning systems.  ... 
arXiv:2010.05809v1 fatcat:6hjc6dmsjbcoplecvfx24xa6yi
Showing results 1 — 15 out of 1,353 results