866 Hits in 3.5 sec

An Empirical Evaluation of Adversarial Examples Defences, Combinations and Robustness Scores

Aleksandar Jankovic, Rudolf Mayer
2022 Proceedings of the 2022 ACM on International Workshop on Security and Privacy Analytics  
In this paper, we therefore evaluate several state-of-the-art white- and black-box adversarial attacks against Convolutional Neural Networks for image recognition, for various attack targets.  ...  Our results indicate that pre-processors are very effective against attacks with adversarial examples that are very close to the original images, that combinations can improve the defence strength, and  ...  Future work will focus on extending these experiments to more datasets, models, and robustness scores, and include further defences for combination.  ... 
doi:10.1145/3510548.3519370 fatcat:d7goyg2qo5cljecuneq74eazfi
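
A minimal sketch of the kind of input pre-processor defence evaluated above (bit-depth reduction plus local median smoothing, in the spirit of feature squeezing); the `model` and `images` names are placeholders, not taken from the paper:

```python
# Hedged sketch: a generic input pre-processor defence, not the paper's exact setup.
import numpy as np
from scipy.ndimage import median_filter

def squeeze_inputs(images: np.ndarray, bits: int = 4, window: int = 2) -> np.ndarray:
    """Reduce colour depth and apply a small median filter.

    images: float array in [0, 1], shape (N, H, W, C).
    Perturbations that sit very close to the original image tend to be
    flattened out by the coarser quantisation grid and the local smoothing.
    """
    levels = 2 ** bits - 1
    squeezed = np.round(images * levels) / levels                     # bit-depth reduction
    squeezed = median_filter(squeezed, size=(1, window, window, 1))   # local smoothing
    return np.clip(squeezed, 0.0, 1.0)

# Usage (hypothetical): predictions = model.predict(squeeze_inputs(adversarial_images))
```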

Vax-a-Net: Training-time Defence Against Adversarial Patch Attacks [article]

T. Gittings, S. Schneider, J. Collomosse
2020 arXiv   pre-print
We present Vax-a-Net, a technique for immunizing convolutional neural networks (CNNs) against adversarial patch attacks (APAs).  ...  We introduce a conditional Generative Adversarial Network (GAN) architecture that simultaneously learns to synthesise patches for use in APAs, whilst exploiting those attacks to adapt a pre-trained target  ...  Hayes [11] created a different method to defend against localised adversarial attacks. The defence is split into two stages: detection and removal.  ... 
arXiv:2009.08194v1 fatcat:7ipd32ldnjg7nj4sm7jaske6n4
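
A hedged sketch of the alternating scheme the excerpt describes: a generator learns to synthesise adversarial patches while the pre-trained classifier is adapted on the patched images. All names (`classifier`, `patch_generator`, `apply_patch`, `loader`) are hypothetical placeholders, not the authors' code:

```python
# Hedged sketch of an alternating "synthesise patch, then adapt the model" loop.
import torch

def vaccinate(classifier, patch_generator, loader, apply_patch, epochs=1):
    opt_g = torch.optim.Adam(patch_generator.parameters(), lr=1e-4)
    opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-5)
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            # 1) Generator step: make the patch fool the current classifier.
            patch = patch_generator(torch.randn(images.size(0), 100))
            attacked = apply_patch(images, patch)
            loss_g = -ce(classifier(attacked), labels)   # maximise classifier error
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
            # 2) Classifier step: adapt so patched images are still classified correctly.
            attacked = apply_patch(images, patch.detach())
            loss_c = ce(classifier(attacked), labels)
            opt_c.zero_grad(); loss_c.backward(); opt_c.step()
```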

When Machine Learning Meets Privacy: A Survey and Outlook [article]

Bo Liu, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, Zihuai Lin
2020 arXiv   pre-print
This paper surveys the state of the art in privacy issues and solutions for machine learning.  ...  survey covers three categories of interactions between privacy and machine learning: (i) private machine learning, (ii) machine learning-aided privacy protection, and (iii) machine learning-based privacy attack  ...  In split learning, "client-side communication costs are significantly reduced as the data to be transmitted is restricted to first few layers of the split neural network prior to the split".  ... 
arXiv:2011.11819v1 fatcat:xuyustzlbngo3ivqkc4paaer5q

When Machine Learning Meets Privacy

Bo Liu, Ming Ding, Sina Shaham, Wenny Rahayu, Farhad Farokhi, Zihuai Lin
2021 ACM Computing Surveys  
This article surveys the state of the art in privacy issues and solutions for machine learning.  ...  survey covers three categories of interactions between privacy and machine learning: (i) private machine learning, (ii) machine learning-aided privacy protection, and (iii) machine learning-based privacy attack  ...  In split learning, "client-side communication costs are significantly reduced as the data to be transmitted is restricted to first few layers of the split neural network prior to the split".  ... 
doi:10.1145/3436755 fatcat:cbkbmxj7krc3xoedv6tan4fle4

Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning [article]

Chuan Ma, Jun Li, Kang Wei, Bo Liu, Ming Ding, Long Yuan, Zhu Han, H. Vincent Poor
2022 arXiv   pre-print
against such threats.  ...  We explore and analyze the potential threats at each information exchange level based on an overview of current state-of-the-art attack mechanisms, and then discuss possible defense methods  ...  induce further security issues. 2) Split Learning: Split learning, a type of distributed deep learning [23], [49]–[51], is also known as a split neural network (SplitNN).  ... 
arXiv:2202.09027v2 fatcat:hlu7bopcjrc6zjn2pct57utufy
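
The split-learning setup quoted in the entries above can be illustrated with a small sketch: the client executes only the layers before the cut and transmits the cut-layer activations, so raw data never leaves the client. Layer sizes below are illustrative, not taken from any of the surveyed systems:

```python
# Hedged sketch of a SplitNN forward pass; sizes and data are illustrative.
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(784, 256), nn.ReLU())                        # runs on the client
server_part = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 10))    # runs on the server

x = torch.randn(32, 784)                 # private client data
smashed = client_part(x)                 # only this cut-layer activation is transmitted
logits = server_part(smashed)            # server completes the forward pass

# The backward pass mirrors this: the server returns the gradient w.r.t. `smashed`,
# and the client continues backpropagation through its own layers.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
loss.backward()                          # autograd handles both halves here for brevity
```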

Estimating the Circuit Deobfuscating Runtime based on Graph Deep Learning [article]

Zhiqian Chen, Gaurav Kolhe, Setareh Rafatirad, Sai Manoj P. D., Houman Homayoun, Liang Zhao, Chang-Tien Lu
2020 arXiv   pre-print
for deobfuscation.  ...  logic gates whose functionality cannot be precisely determined by the attacker.  ...  The key to defending against deobfuscation is speed.  ... 
arXiv:1902.05357v2 fatcat:cmqk5xqqnzhq5gvxpypjfbojqi

A Systematic Review on Machine Learning and Deep Learning Models for Electronic Information Security in Mobile Networks

Chaitanya Gupta, Ishita Johri, Kathiravan Srinivasan, Yuh-Chung Hu, Saeed Mian Qaisar, Kuo-Yi Huang
2022 Sensors  
We address the need to develop new approaches that provide a high level of security for electronic data in mobile networks, because the possibilities for increasing mobile network security are inexhaustible.  ...  According to the research, an artificial intelligence-based security model should assure the secrecy, integrity, and authenticity of the system, its equipment, and the protocols that control the network  ...  Decision trees work by checking the data against an entropy measure to determine the best split at each node.  ... 
doi:10.3390/s22052017 pmid:35271163 pmcid:PMC8915055 fatcat:6khxq7pkyzgifdos7ifcyqsmgi
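
The entropy-based splitting rule mentioned in the excerpt can be made concrete with a short sketch that computes the information gain of a candidate threshold (the values below are illustrative):

```python
# Hedged sketch of an entropy-based split criterion for a decision-tree node.
import numpy as np

def entropy(labels: np.ndarray) -> float:
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(feature: np.ndarray, labels: np.ndarray, threshold: float) -> float:
    left, right = labels[feature <= threshold], labels[feature > threshold]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(labels)
    return entropy(labels) - weighted

# Example: the threshold 0.5 separates the two classes perfectly, so the gain is 1.0.
y = np.array([0, 0, 1, 1, 1, 0])
x = np.array([0.1, 0.4, 0.6, 0.8, 0.9, 0.3])
print(information_gain(x, y, 0.5))
```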

A Survey of Machine Learning Algorithms for Detecting Malware in IoT Firmware [article]

Erik Larsen, Korey MacVittie, John Lilly
2021 arXiv   pre-print
Attacks against such devices can go unnoticed, and users can become a weak point in security. Malware can cause DDoS attacks and even spy on sensitive areas like people's homes.  ...  Deep learning approaches including Convolutional and Fully Connected Neural Networks with both experimental and proven successful architectures are also explored.  ...  Technical Fellows program for encouraging and assisting this research.  ... 
arXiv:2111.02388v1 fatcat:ssdpzczw5bhkzl5elstrxnixqm

Robust Sensor Fusion Algorithms Against Voice Command Attacks in Autonomous Vehicles [article]

Jiwei Guan, Xi Zheng, Chen Wang, Yipeng Zhou, Alireza Jolfaei
2021 arXiv   pre-print
In this paper, we aim to develop a more practical solution by using camera views to defend against inaudible command attacks, where ADAS are capable of sensing their environment via multiple sensors.  ...  Code is available at https://github.com/ITSEG-MQ/Sensor-Fusion-Against-VoiceCommand-Attacks.  ...  Audio Adversarial Defense: To defend against these audio attacks, several defences already exist.  ... 
arXiv:2104.09872v3 fatcat:orgbuv7b4reyzpv76s5amtnxzy

Adversarial Attacks and Defences: A Survey [article]

Anirban Chakraborty and Manaar Alam and Vishal Dey and Anupam Chattopadhyay and Debdeep Mukhopadhyay
2018 arXiv   pre-print
against them.  ...  In this paper, we attempt to provide a detailed discussion on different types of adversarial attacks with various threat models and also elaborate on the efficiency and challenges of recent countermeasures  ...  The authors [46] proposed three different training methods for HGD, illustrated in Figure 17. In FGD (Feature Guided Denoiser), they defined l = −2 as the index of the topmost convolutional  ... 
arXiv:1810.00069v1 fatcat:f3mp5jo3bncn3lppqjl2oyxlim
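
A hedged sketch of the feature-guided denoiser (FGD) objective the excerpt refers to: the denoiser is trained so that the target network's features at the chosen layer (the topmost convolutional layer, index l = −2 in the paper's notation) match between the denoised adversarial image and the clean image. `denoiser` and `feature_extractor` are placeholders, not the authors' code:

```python
# Hedged sketch of a feature-guided denoiser training objective.
import torch

def fgd_loss(denoiser, feature_extractor, x_clean, x_adv):
    x_denoised = denoiser(x_adv)
    f_clean = feature_extractor(x_clean).detach()          # guidance signal, no gradient to the target net
    f_denoised = feature_extractor(x_denoised)
    return torch.mean(torch.abs(f_denoised - f_clean))     # L1 distance in feature space
```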

A survey on adversarial attacks and defences

Anirban Chakraborty, Manaar Alam, Vishal Dey, Anupam Chattopadhyay, Debdeep Mukhopadhyay
2021 CAAI Transactions on Intelligence Technology  
against them.  ...  Herein, the authors attempt to provide a detailed discussion on different types of adversarial attacks with various threat models and also elaborate on the efficiency and challenges of recent countermeasures  ...  We would also like to acknowledge the Haldia Petrochemicals Ltd. and TCG Foundation for the research grant entitled Cyber Security Research in CPS.  ... 
doi:10.1049/cit2.12028 fatcat:iedjaoomardbpchx2ffhcc7zjq

Integer-arithmetic-only Certified Robustness for Quantized Neural Networks [article]

Haowen Lin, Jian Lou, Li Xiong, Cyrus Shahabi
2021 arXiv   pre-print
robustness against adversarial perturbations.  ...  These defensive models can neither run efficiently on edge devices nor be deployed on integer-only logical units such as Turing Tensor Cores or integer-only ARM processors.  ...  To defend against these attacks, several works have proposed defensive techniques to improve the robustness of deep neural networks [1, 28, 26, 41].  ... 
arXiv:2108.09413v1 fatcat:wldpffzsxzfnzf7v6o6iesn3hy
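
As background on why integer-only hardware is restrictive, a standard affine quantisation sketch (not taken from the paper) shows how float activations are mapped onto the 8-bit integer grid such units can execute:

```python
# Hedged background sketch: standard affine quantisation and dequantisation.
import numpy as np

def quantize(x: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.int32) - zero_point) * scale

x = np.array([-0.8, 0.0, 0.37, 1.2], dtype=np.float32)
scale, zero_point = 2.0 / 255, 128            # covers roughly the range [-1, 1]
print(dequantize(quantize(x, scale, zero_point), scale, zero_point))
```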

MaskedNet: The First Hardware Inference Engine Aiming Power Side-Channel Protection [article]

Anuj Dubey, Rosario Cammarota, Aydin Aysu
2019 arXiv   pre-print
First, it shows DPA attacks during inference to extract the secret model parameters such as weights and biases of a neural network.  ...  Expanding side-channel analysis to Machine Learning Model extraction, however, is largely unexplored. This paper expands the DPA framework to neural-network classifiers.  ...  ACKNOWLEDGEMENTS: We thank the anonymous reviewers of HOST for their valuable feedback and Itamar Levi for helpful discussions.  ... 
arXiv:1910.13063v3 fatcat:iand6q5qb5g2lpgtbsh6bnlmfy

Rethinking Privacy Preserving Deep Learning: How to Evaluate and Thwart Privacy Attacks [article]

Lixin Fan, Kam Woh Ng, Ce Ju, Tianyu Zhang, Chang Liu, Chee Seng Chan, Qiang Yang
2020 arXiv   pre-print
This paper investigates the capabilities of Privacy-Preserving Deep Learning (PPDL) mechanisms against various forms of privacy attacks.  ...  Third, based on theoretical analysis, a novel Secret Polarization Network (SPN) is proposed to thwart privacy attacks, which pose serious challenges to existing PPDL methods.  ...  Appendix A: Proofs of Reconstruction Attacks. Consider a neural network Ψ(x; w, b) : X → R^C, where x ∈ X, w and b are the weights and biases of the network, and C is the output dimension.  ... 
arXiv:2006.11601v2 fatcat:mmxutdizifgqppze27yitoufcm
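
As a hedged illustration of the kind of leakage such reconstruction attacks exploit (not the paper's SPN construction): for a single linear layer z = Wx + b followed by any loss L, the shared gradients satisfy dL/dW = (dL/dz) x^T and dL/db = dL/dz, so the private input is recovered as a row-wise ratio of the two gradients:

```python
# Hedged illustration of input reconstruction from a linear layer's gradients.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # the private input an attacker wants to recover
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
target = np.array([0.0, 1.0, 0.0])

z = W @ x + b
dL_dz = z - target                     # gradient of 0.5 * ||z - target||^2 w.r.t. z
dL_dW = np.outer(dL_dz, x)             # what a participant would share
dL_db = dL_dz

x_reconstructed = dL_dW[0] / dL_db[0]  # row-wise ratio recovers x exactly (if dL_db[0] != 0)
print(np.allclose(x_reconstructed, x)) # True
```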

Search-based test and improvement of machine-learning-based anomaly detection systems

Maxime Cordy, Steve Muller, Mike Papadakis, Yves Le Traon
2019 Proceedings of the 28th ACM SIGSOFT International Symposium on Software Testing and Analysis - ISSTA 2019  
By co-evolving our attack and defence mechanisms, we succeeded in improving the defence of the IDS under test, making it resilient to 49 out of 50 independently generated attacks.  ...  We evaluate our approach on a denial-of-service attack detection scenario and a dataset recording the network traffic of a real-world system.  ...  We also propose the inverse technique, i.e., a search technique that looks for countermeasures (defence strategies) to counter given attacks.  ... 
doi:10.1145/3293882.3330580 dblp:conf/issta/CordyMPT19 fatcat:tlbufms4l5amjls7bcuze33af4
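
A hedged sketch of the co-evolution idea the excerpt describes, with attacks and defence configurations evolved against each other in turns; the fitness functions and candidate encodings are hypothetical placeholders, not the authors' implementation:

```python
# Hedged sketch of a generic attack/defence co-evolution loop.
import random

def mutate(candidate: dict) -> dict:
    # Placeholder mutation: perturb one numeric parameter of the candidate.
    candidate = dict(candidate)
    key = random.choice(list(candidate))
    candidate[key] *= random.uniform(0.8, 1.2)
    return candidate

def coevolve(attacks, defences, attack_fitness, defence_fitness, generations=50):
    for _ in range(generations):
        # Evolve attacks against the current best defence.
        best_defence = max(defences, key=lambda d: defence_fitness(d, attacks))
        attacks = sorted(attacks, key=lambda a: attack_fitness(a, best_defence), reverse=True)
        attacks = attacks[: len(attacks) // 2] + [mutate(a) for a in attacks[: len(attacks) // 2]]
        # Evolve defences against the strongest surviving attacks.
        defences = sorted(defences, key=lambda d: defence_fitness(d, attacks), reverse=True)
        defences = defences[: len(defences) // 2] + [mutate(d) for d in defences[: len(defences) // 2]]
    return attacks, defences
```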
Showing results 1 — 15 out of 866 results