207 Hits in 4.6 sec

Deep Learning-Based Autonomous Driving Systems: A Survey of Attacks and Defenses [article]

Yao Deng, Tiehua Zhang, Guannan Lou, Xi Zheng, Jiong Jin, Qing-Long Han
2021 arXiv   pre-print
The analysis is unrolled by taking an in-depth overview of each step in the ADS workflow, covering adversarial attacks against various deep learning models and attacks in both physical and cyber contexts.  ...  However, ADSs are still plagued by increasing threats from different attacks, which can be categorized into physical attacks, cyberattacks, and learning-based adversarial attacks.  ...  than adversarial attacks, reverse-engineering attacks on ADSs are another possible research direction.  ... 
arXiv:2104.01789v2 fatcat:zekeddt7zzcnrphu3f4yw6vzii

OSINT-Based LPC-MTD and HS-Decoy for Organizational Defensive Deception

Sang Seo, Dohoon Kim
2021 Applied Sciences  
...  they do not consider the security characterization of each organizational social engineering attack and related utilization plans, and no quantitative deception modeling is performed for the attenuation of  ...  We present the concept of an open-source intelligence (OSINT)-based hierarchical social engineering decoy (HS-Decoy) strategy while considering the actual fingerprint of each organization.  ...  Conflicts of Interest: The authors declare no conflicts of interest.  ... 
doi:10.3390/app11083402 fatcat:h5dn6kwjffc5vjfoia2xx566mq

Identification of Attack-Specific Signatures in Adversarial Examples [article]

Hossein Souri, Pirazh Khorramshahi, Chun Pong Lau, Micah Goldblum, Rama Chellappa
2021 arXiv   pre-print
The adversarial attack literature contains a myriad of algorithms for crafting perturbations which yield pathological behavior in neural networks.  ...  Then, we leverage recent advances in parameter-space saliency maps to show, both visually and quantitatively, that adversarial attack algorithms differ in which parts of the network and image they target  ...  This network yields an accuracy of 94.23%. Figure 3. Adversarial Perturbation Recovery via Reverse Engineering of Deceptions via Residual Learning (REDRL) pipeline.  ... 
arXiv:2110.06802v1 fatcat:wz4sex6kfjgo7ggvwn4adr2kdi

A Deception Model Robust to Eavesdropping over Communication for Social Network Systems

Abiodun Esther Omolara, Aman Jantan, Oludare Isaac Abiodun, Kemi Victoria Dada, Humaira Arshad, Etuh Emmanuel
2019 IEEE Access  
The result shows that the proposed model reinforces state-of-the-art encryption schemes and will serve as an effective component for discouraging eavesdropping and curtailing brute-force attacks on encrypted  ...  To this end, the objective of this research is to reinforce the current encryption measures with a decoy-based deception model where the eavesdropper is discouraged from stealing encrypted messages by confounding  ...  , functional comparison with the current deception-based model for IM systems, generic evaluation of the model for an attacker with side information.  ... 
doi:10.1109/access.2019.2928359 fatcat:oik4kmscf5czpbbfd726c23roe

Machine Learning Security: Threats, Countermeasures, and Evaluations

Mingfu Xue, Chengxiang Yuan, Heyi Wu, Yushu Zhang, Weiqiang Liu
2020 IEEE Access  
First, the machine learning model in the presence of adversaries is presented, and the reasons why machine learning can be attacked are analyzed.  ...  INDEX TERMS Artificial intelligence security, poisoning attacks, backdoor attacks, adversarial examples, privacy-preserving machine learning.  ...  Lowd and Meek [40] introduce the adversarial learning problem, in which the adversary tries to reverse-engineer the classifier by sending a number of queries.  ... 
doi:10.1109/access.2020.2987435 fatcat:ksinvcvcdvavxkzyn7fmsa27ji
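
The Lowd and Meek snippet in the entry above describes reverse-engineering a classifier purely through queries. The sketch below is only a hypothetical illustration of that general idea, not code from the survey or from [40]: it assumes a black-box binary classifier over binary features, one instance known to be flagged, and one known to pass, and it spends one query per probed feature to discover which features the decision depends on.

```python
# Hypothetical sketch of query-based classifier reverse engineering in the
# spirit of the Lowd-and-Meek adversarial learning setting. The black-box
# classifier, the binary feature space, and the probing loop are assumptions
# made for illustration only.

from typing import Callable, List

def probe_relevant_features(
    classify: Callable[[List[int]], bool],   # black box: True = "malicious"
    malicious: List[int],                    # an instance known to be flagged
    benign: List[int],                       # an instance known to pass
) -> List[int]:
    """Return indices of features whose value, borrowed from the benign
    instance, flips the classifier's decision on the malicious instance.
    Each loop iteration costs exactly one query to the black box."""
    relevant = []
    for i in range(len(malicious)):
        candidate = list(malicious)
        candidate[i] = benign[i]             # swap in one benign feature value
        if not classify(candidate):          # decision flipped -> feature matters
            relevant.append(i)
    return relevant

if __name__ == "__main__":
    # Toy linear classifier standing in for the unknown target model.
    weights = [3, 0, 2, 0, 1]
    classify = lambda x: sum(w * v for w, v in zip(weights, x)) >= 4
    print(probe_relevant_features(classify, [1, 1, 1, 1, 1], [0, 0, 0, 0, 0]))
    # -> [0]: only flipping the heavily weighted feature 0 drops the score
    #    below the threshold; the other single flips leave the decision intact.
```

With a budget of n queries this only reveals which single-feature changes matter; the actual adversarial learning problem studies how many queries are needed to find a minimally costly evading instance.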

Adversarial Machine Learning in Text Processing: A Literature Survey

Izzat Alsmadi, Nura Aljaafari, Mahmoud Nazzal, Shadan Alhamed, Ahmad H. Sawalmeh, Conrado P. Vizcarra, Abdallah Khreishah, Muhammad Anan, Abdulelah Algosaibi, Mohammed Abdulaziz Al-Naeem, Adel Aldalbahi, Abdulaziz Al-Humam
2022 IEEE Access  
In this paper, we surveyed major subjects in adversarial machine learning for text processing applications.  ...  We focused on some of the evolving research areas, such as malicious versus genuine text generation metrics, defense against adversarial attacks, and text generation models and algorithms.  ...  Reference [4] uses the Dada Engine to generate masquerade emails by fine-tuning the grammar of the Dada Engine with respect to the original author's main stylistic elements while inducing content deception that  ... 
doi:10.1109/access.2022.3146405 fatcat:emahpmjqmnbjpbhptrrtrjlja4

A Survey of Machine Learning Techniques in Adversarial Image Forensics [article]

Ehsan Nowroozi, Ali Dehghantanha, Reza M. Parizi, Kim-Kwang Raymond Choo
2020 arXiv   pre-print
However, there are also a number of limitations and vulnerabilities associated with machine learning-based approaches, for example, how to detect adversarial (image) examples, with real-world consequences  ...  Therefore, with a focus on image forensics, this paper surveys techniques that can be used to enhance the robustness of machine learning-based binary manipulation detectors in various adversarial scenarios  ...  Acknowledgements The first author thanks members of the Visual Information Processing and Protection (VIPP) group at the University of Siena, Italy for their suggestions.  ... 
arXiv:2010.09680v1 fatcat:qzvolq6kvrggfbyg23wrcnykza

Playing to Learn Better: Repeated Games for Adversarial Learning with Multiple Classifiers [article]

Prithviraj Dasgupta, Joseph B. Collins, Michael McCarrick
2020 arXiv   pre-print
We propose a game theory-based technique called a Repeated Bayesian Sequential Game where the learner interacts repeatedly with a model of the adversary using self play to determine the distribution of  ...  It then strategically selects a classifier from a set of pre-trained classifiers, balancing the likelihood of a correct prediction for the query against the cost of using the classifier.  ...  For generating adversarial text, we used the single-character gradient-based replacement technique (Liang et al. 2018).  ... 
arXiv:2002.03924v1 fatcat:5oddd6bx5vat3i2cbob2hvk3z4
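
The entry above mentions a single-character, gradient-guided replacement attack for adversarial text (attributed there to Liang et al. 2018). The sketch below is only an assumed illustration of that general idea, not the paper's method: a toy linear character model with an analytic gradient stands in for a real differentiable text classifier, and a first-order estimate of the score change decides which single character to replace.

```python
# Hypothetical single-character, gradient-guided replacement on a toy text
# classifier. The model, its analytic gradient, and the weights below are
# assumptions made purely for illustration.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def toy_score(text, char_weight, pos_weight):
    """Toy classifier score: positive means 'class 1'."""
    return sum(pos_weight[i] * char_weight[c] for i, c in enumerate(text))

def grad_wrt_onehot(text, char_weight, pos_weight):
    """Analytic gradient of the score w.r.t. the one-hot character encoding:
    d score / d onehot[i][c] = pos_weight[i] * char_weight[c]."""
    return [[pos_weight[i] * char_weight[c] for c in ALPHABET]
            for i in range(len(text))]

def single_char_attack(text, char_weight, pos_weight):
    """Replace the one character whose substitution most decreases the score,
    ranked by the first-order (gradient) estimate of the change."""
    grad = grad_wrt_onehot(text, char_weight, pos_weight)
    best = (0.0, None, None)  # (estimated score change, position, new char)
    for i, old in enumerate(text):
        for j, new in enumerate(ALPHABET):
            # First-order estimate of the score change from swapping old -> new.
            delta = grad[i][j] - grad[i][ALPHABET.index(old)]
            if delta < best[0]:
                best = (delta, i, new)
    _, i, new = best
    return text if i is None else text[:i] + new + text[i + 1:]

if __name__ == "__main__":
    char_weight = {c: 0.0 for c in ALPHABET}
    char_weight.update({"g": 2.0, "o": 1.0, "d": 1.5, "b": -2.0, "a": -1.0})
    pos_weight = [1.0, 1.0, 2.0, 1.0]
    text = "good"
    print(text, toy_score(text, char_weight, pos_weight))   # 6.5, class 1
    adv = single_char_attack(text, char_weight, pos_weight)
    print(adv, toy_score(adv, char_weight, pos_weight))     # "gobd", score 0.5
```

In a real attack the gradient would come from backpropagation through the target model rather than a closed form, but the selection rule, pick the position and replacement character with the most damaging first-order effect, is the same.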

Applications in Security and Evasions in Machine Learning: A Survey

Ramani Sagar, Rutvij Jhaveri, Carlos Borrego
2020 Electronics  
Moreover, we illustrate the adversarial attacks based on the attackers' knowledge about the model and address the point of the model at which possible attacks may be committed.  ...  Finally, we also investigate different types of properties of the adversarial attacks.  ...  Conflicts of Interest: The authors declare no conflict of interest.  ... 
doi:10.3390/electronics9010097 fatcat:ttmpehdctjhbdk7arxgczl6224

Online Social Deception and Its Countermeasures for Trustworthy Cyberspace: A Survey [article]

Zhen Guo, Jin-Hee Cho, Ing-Ray Chen, Srijan Sengupta, Michin Hong, Tanushree Mitra
2020 arXiv   pre-print
In this paper, we conducted an extensive survey, covering (i) the multidisciplinary concepts of social deception; (ii) types of OSD attacks and their unique characteristics compared to other social network  ...  Based on this survey, we provide insights into the effectiveness of countermeasures and the lessons from existing literature.  ...  Albladi and Weir [6] analyzed various user characteristics, such as level of involvement, for vulnerability to social engineering attacks.  ... 
arXiv:2004.07678v1 fatcat:k4a6siywefb6lhkmyn67lmoqwe

Artificial Intelligence in the Cyber Domain: Offense and Defense

Thanh Cong Truong, Quoc Bao Diep, Ivan Zelinka
2020 Symmetry  
In the cybersecurity domain, AI-based techniques can provide better cyber defense tools and help adversaries improve methods of attack.  ...  However, malicious actors are aware of the new prospects too and will probably attempt to use them for nefarious purposes.  ...  In this case, the attacker learns how ML algorithms work through reverse-engineering techniques. From this knowledge, the malicious actors know what the detector engines are looking for and how to avoid detection.  ... 
doi:10.3390/sym12030410 fatcat:7gyse3gaxjguhgkvfnbi7knkf4

Advances and Open Problems in Federated Learning [article]

Peter Kairouz, H. Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G.L. D'Oliveira, Hubert Eichner (+47 others)
2021 arXiv   pre-print
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science  ...  Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.  ...  Acknowledgments The authors would like to thank Alex Ingerman and David Petrou for their useful suggestions and insightful comments during the review process.  ... 
arXiv:1912.04977v3 fatcat:efkbqh4lwfacfeuxpe5pp7mk6a

Evaluating the Impact of Malware Analysis Techniques for Securing Web Applications through a Decision-Making Framework under Fuzzy Environment

Rajeev Kumar, Babasaheb Bhimrao Ambedkar University, Mamdouh Alenezi, Md Ansari, Bineet Gupta, Alka Agrawal, Raees Khan, Prince Sultan University, Babasaheb Bhimrao Ambedkar University, Shri Ramswaroop Memorial University, Babasaheb Bhimrao Ambedkar University, Babasaheb Bhimrao Ambedkar University
2020 International Journal of Intelligent Engineering and Systems  
The findings of the study show that the Reverse Engineering approach is the most efficient technique for analyzing complex malware.  ...  Nowadays, most cyber-attacks are initiated by extremely malicious programs known as malware. Malware is very vigorous and can penetrate the security of information and communication systems.  ...  Acknowledgments: The authors are grateful to Prince Sultan University, Saudi Arabia, for sponsoring this research.  ... 
doi:10.22266/ijies2020.1231.09 fatcat:vucvo7nmoraczh5hpaoate6onq

Thermonuclear Cyberwar

Erik Gartzke
2016 Social Science Research Network  
For the most part, nuclear actors can openly advertise their weapons to signal the costs of aggression to potential adversaries, thereby reducing the danger of misperception and war.  ...  When combined, the warfighting advantages of cyber operations become dangerous liabilities for nuclear deterrence.  ...  Two years later, the Principal Deputy Under Secretary of Defense for Research and Engineering released a broad-based, multiservice report that doubled down on SAC's findings: "the United States could not  ... 
doi:10.2139/ssrn.2836208 fatcat:wlipixfr6bh7fh75mwkxap7ake

Thermonuclear cyberwar

Erik Gartzke, Jon R. Lindsay
2017 Journal of Cybersecurity  
For the most part, nuclear actors can openly advertise their weapons to signal the costs of aggression to potential adversaries, thereby reducing the danger of misperception and war.  ...  When combined, the warfighting advantages of cyber operations become dangerous liabilities for nuclear deterrence.  ...  Two years later, the Principal Deputy Under Secretary of Defense for Research and Engineering released a broad-based, multiservice report that doubled down on SAC's findings: "the United States could not  ... 
doi:10.1093/cybsec/tyw017 dblp:journals/cybersecurity/GartzkeL17 fatcat:ff2dneoyyfbd5f4ltrczyifd2u
Showing results 1 — 15 out of 207 results