Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance
[article]
2021
arXiv
pre-print
We first provide requirements for authentication and provenance for a secure machine learning system. ...
In this work, we take a different approach to preventing data poisoning attacks which relies on cryptographically-based authentication and provenance to ensure the integrity of the data used to train a ...
Software poisoning attacks are another threat vector for machine learning systems, and VAMP also prevents software poisoning attacks. ...
arXiv:2105.10051v1
fatcat:z3aofniukbal5egy6p4nf5btje
Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication
[article]
2021
arXiv
pre-print
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples, and we show that our deep learning model is surprisingly robust to such an attack scenario. ...
In this research, we collect tri-axial accelerometer gesture data (TAGD) from 46 users and perform classification experiments with both classical machine learning and deep learning models. ...
We test both poisoning and evasion attacks using learning models to generate adversarial samples; specifically, we use a type of generative adversarial network (GAN) to produce adversarial samples. ...
arXiv:2110.14597v1
fatcat:srlx7exlmjbrdirqecdkcaxweq
AI for Beyond 5G Networks: A Cyber-Security Defense or Offense Enabler?
[article]
2022
arXiv
pre-print
patterns from a large set of time-varying multi-dimensional data, and deliver faster and more accurate decisions. ...
pointing out their limitations and adoption challenges. ...
Adversarial Machine Learning: Adversarial Machine Learning (AML) [16] aims at improving the robustness of ML techniques to adversarial attacks by assessing their vulnerabilities and devising appropriate ...
arXiv:2201.02730v1
fatcat:upuk2pjcfzag5bjs5woeiwkxe4
Security of Distributed Intelligence in Edge Computing: Threats and Countermeasures
[chapter]
2020
The Cloud-to-Thing Continuum
Furthermore, the recent trend of incorporating intelligence in edge computing systems has led to its own security issues such as data and model poisoning, and evasion attacks. ...
However, due to the issues of resource-constrained hardware and software heterogeneities, most edge computing systems are prone to a large variety of attacks. ...
Defenses against Data Poisoning: In a data poisoning attack on a machine learning system, the adversary injects malicious samples into the training pool. ...
doi:10.1007/978-3-030-41110-7_6
fatcat:dikjnfcbhre3rnaumiwx2gxptm
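The snippet above describes data poisoning, where an adversary injects malicious samples into the training pool. One common family of defenses sanitizes the training set before learning; a minimal illustrative sketch (not from the cited chapter, using a hypothetical distance-based filter) is:

```python
import numpy as np

def filter_poisoned(X, y, z_thresh=3.0):
    """Drop training samples that lie far from their class centroid.

    A simple distance-based sanitization step: injected poison points
    often sit far from the bulk of the class they are labeled as.
    """
    keep = np.ones(len(X), dtype=bool)
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        centroid = X[idx].mean(axis=0)
        dists = np.linalg.norm(X[idx] - centroid, axis=1)
        mu, sigma = dists.mean(), dists.std()
        if sigma > 0:
            # Flag samples more than z_thresh standard deviations out.
            keep[idx[dists > mu + z_thresh * sigma]] = False
    return X[keep], y[keep]

# A tight cluster near the origin plus one obvious poison point.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, size=(50, 2)), [[10.0, 10.0]]])
y = np.zeros(51, dtype=int)
X_clean, y_clean = filter_poisoned(X, y)  # the outlier is removed
```

Real defenses are more sophisticated (e.g., influence-based or certified approaches), but the core idea of rejecting statistically anomalous training points is the same.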
Reaching Data Confidentiality and Model Accountability on the CalTrain
[article]
2018
arXiv
pre-print
However, this approach to achieving data confidentiality makes today's DCL designs fundamentally vulnerable to data poisoning and backdoor attacks. ...
Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from distrusting multi-party participants. ...
are used for authenticating and decrypting the training data. ...
arXiv:1812.03230v1
fatcat:boywhcunwfcybj6ze2dapfhbgq
Generating Comprehensive Data with Protocol Fuzzing for Applying Deep Learning to Detect Network Attacks
[article]
2020
arXiv
pre-print
Our findings show that fuzzing generates data samples that cover real-world data and deep learning models trained with fuzzed data can successfully detect real network attacks. ...
Network attacks have become a major security concern for organizations worldwide and have also drawn attention in academia. ...
used for training traditional machine learning models. ...
arXiv:2012.12743v1
fatcat:clj3hgd6lvazhhrwswzqltgu7i
Spoofing detection on adaptive authentication System‐A survey
2021
IET Biometrics
With the widespread of computing and mobile devices, authentication using biometrics has received greater attention. ...
However, their adaptability to changes may be exploited by an attacker to compromise the stored templates, either to impersonate a specific client or to deny access to him/her. ...
Nowadays, deep learning models have an edge over traditional machine learning models. Using deep learning, the model can learn complex patterns from a given input. ...
doi:10.1049/bme2.12060
fatcat:rjaupc3cpzcg5nkipaqun65gjy
Cybersecurity in Intelligent Transportation Systems
2020
Computers
The main focus of security approaches is: configuration and initialization of the devices during manufacturing at perception layer; anonymous authentication of nodes in VANET at network layer; defense ...
of fog-based structures at support layer and description and standardization of the complex model of data and metadata and defense of systems, based on AI at application layer. ...
Machine Learning: Machine learning (ML) is the subset of AI that is most widely used in cybersecurity systems. ...
doi:10.3390/computers9040083
fatcat:lct3ng6lk5frnfd2htxhlraqle
Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
[article]
2022
arXiv
pre-print
learning (ML) and artificial intelligence (AI) that can be processed on distributed UEs. ...
We explore and analyze the potential of threats for each information exchange level based on an overview of the current state-of-the-art attack mechanisms, and then discuss the possible defense methods ...
Cryptology has proven useful in a large number of authentication and access control scenarios, but it cannot address the problem of entirely new participants. ...
arXiv:2202.09027v2
fatcat:hlu7bopcjrc6zjn2pct57utufy
Machine Learning Security: Threats, Countermeasures, and Evaluations
2020
IEEE Access
INDEX TERMS Artificial intelligence security, poisoning attacks, backdoor attacks, adversarial examples, privacy-preserving machine learning. ...
First, the machine learning model in the presence of adversaries is presented, and the reasons why machine learning can be attacked are analyzed. ...
This method can prevent wrong updates and prevent the poisoning data from reducing the performance of the distributed SVM. ...
doi:10.1109/access.2020.2987435
fatcat:ksinvcvcdvavxkzyn7fmsa27ji
Secure and Provenance Enhanced Internet of Health Things Framework: A Blockchain Managed Federated Learning Approach
2020
IEEE Access
Secure aggregation of models allows us to prevent a poisoning attack, in which a malicious FL node might introduce a backdoor poisonous model that could add bias to the training data and tilt or "poison ...
Provenance Using Blockchain: Blockchain has gained trust in providing provenance, data integrity, authentication, and immutability for the IoHT [84]. ...
doi:10.1109/access.2020.3037474
fatcat:6il44yktnjd4pak2y2rzlnvjdm
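The entry above describes secure aggregation as a defense against a malicious federated-learning node submitting a poisoned model. A standard robust-aggregation idea (a sketch, not the paper's actual protocol) is to take the coordinate-wise median of client updates so no single client can drag the global model arbitrarily far:

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of client model updates.

    Unlike plain averaging, the per-coordinate median bounds the
    influence of any single malicious federated-learning node.
    """
    return np.median(np.stack(updates), axis=0)

# Three honest clients roughly agree; one poisoned client is extreme.
honest = [np.array([1.0, 2.0]), np.array([1.1, 2.1]), np.array([0.9, 1.9])]
poisoned = np.array([100.0, -100.0])
agg = median_aggregate(honest + [poisoned])  # stays near [1.0, 2.0]
```

With plain averaging, the poisoned update would shift the aggregate by roughly 25 units per coordinate here; the median keeps it within the honest clients' range.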
Technologies for Trustworthy Machine Learning: A Survey in a Socio-Technical Context
[article]
2022
arXiv
pre-print
To achieve the objectives of these frameworks, the data and software engineers who build machine-learning systems require knowledge about a variety of relevant supporting tools and techniques. ...
We conclude with an identification of open research problems, with a particular focus on the connection between trustworthy machine learning technologies and their implications for individuals and society ...
[128] propose a solution to protect SVM classifiers from poisoning attacks. Finally, some proposals focus on feature reduction algorithms to prevent poisoning attacks. Rubinstein et al. ...
arXiv:2007.08911v3
fatcat:gmswdvel6bdbvg5rvyzb2uygbu
A Survey on Cybersecurity Challenges, Detection, and Mitigation Techniques for the Smart Grid
2021
Energies
and vulnerabilities at three different levels. ...
The overall smart grid network is comprised of customers accessing the network, communication network of the smart devices and sensors, and the people managing the network (decision makers); all three ...
Another research showed the limitations of voltage-overscaling (VOS)-based authentication, as it can be exploited using machine learning models (ML) [74] . ...
doi:10.3390/en14185894
fatcat:pk6hxwyvuffpxa3gmip2dgxgbe
A Survey on Resilient Machine Learning
[article]
2017
arXiv
pre-print
Machine learning based systems are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicles, taking investment decisions, and detecting and blocking network intrusions ...
However, recent research has shown that machine learning models are vulnerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). ...
attacks and make machine learning algorithms more robust against these attacks. ...
arXiv:1707.03184v1
fatcat:qjylw7bvkzbdlbrof5cfpy2jyq
Machine Learning – The Results Are Not the only Thing that Matters! What About Security, Explainability and Fairness?
[chapter]
2020
Lecture Notes in Computer Science
Recent advances in machine learning (ML) and the surge in computational power have opened the way to the proliferation of ML and Artificial Intelligence (AI) in many domains and applications. ...
The aspects that can hinder practical and trustful ML and AI are: lack of security of ML algorithms as well as lack of fairness and explainability. ...
This work is funded under the SPARTA project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 830892. ...
doi:10.1007/978-3-030-50423-6_46
fatcat:4r2tbrsc7zh2xc3svsp2wtnds4