
Defending Distributed Classifiers Against Data Poisoning Attacks [article]

Sandamal Weerasinghe, Tansu Alpcan, Sarah M. Erfani, Christopher Leckie
2020 arXiv   pre-print
Support Vector Machines (SVMs) are vulnerable to targeted training data manipulations such as poisoning attacks and label flips.  ...  We introduce a weighted SVM against such attacks using K-LID as a distinguishing characteristic that de-emphasizes the effect of suspicious data samples on the SVM decision boundary.  ...  To build an SVM classifier resilient against adversarial attacks, we require the K-LID estimates of benign samples and attacked samples to have distinguishing distributions.  ... 
arXiv:2008.09284v1 fatcat:2ohufvjpwbbtnfvr2puaoqvaa4
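
A minimal sketch of the weighting idea described in this entry: estimate local intrinsic dimensionality (LID) in an RBF-kernel feature space and use it to down-weight suspicious training samples before fitting an SVM. The neighbourhood size, kernel width, and the LID-to-weight mapping below are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def kernel_lid(X, k=20, gamma=0.1):
    """MLE estimate of local intrinsic dimensionality in RBF-kernel feature space."""
    K = rbf_kernel(X, X, gamma=gamma)
    # Feature-space distance: ||phi(x) - phi(y)||^2 = k(x,x) + k(y,y) - 2 k(x,y) = 2 - 2 k(x,y)
    D = np.sqrt(np.maximum(2.0 - 2.0 * K, 1e-12))
    np.fill_diagonal(D, np.inf)                    # exclude self-distances
    d = np.sort(D, axis=1)[:, :k]                  # distances to the k nearest neighbours
    r_k = d[:, -1:]                                # distance to the k-th neighbour
    return -1.0 / np.mean(np.log(d / r_k + 1e-12), axis=1)

def fit_weighted_svm(X, y, k=20, gamma=0.1, C=1.0):
    lid = kernel_lid(X, k=k, gamma=gamma)
    # Samples whose LID deviates strongly from the median get smaller weights.
    z = np.abs(lid - np.median(lid)) / (lid.std() + 1e-12)
    weights = 1.0 / (1.0 + z)
    clf = SVC(kernel="rbf", gamma=gamma, C=C)
    clf.fit(X, y, sample_weight=weights)           # de-emphasize suspicious samples
    return clf
```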

Label Sanitization against Label Flipping Poisoning Attacks [article]

Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
2018 arXiv   pre-print
Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points.  ...  In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning  ...  Although the defensive algorithm is capable of successfully mitigating the effect of optimal poisoning attacks, its performance is limited when defending against label flipping attacks.  ... 
arXiv:1803.00992v2 fatcat:3q73mdoplfeodgosd2lu4czhkq
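
The detect-and-relabel mechanism mentioned above can be illustrated with a simple kNN-based sanitization pass; k and the agreement threshold eta are assumed values, and the paper's exact algorithm may differ.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize_labels(X, y, k=10, eta=0.6):
    """Relabel points whose label disagrees with a confident majority of their k neighbours."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    idx = idx[:, 1:]                               # drop each point itself
    y_clean = y.copy()
    for i in range(len(y)):
        vals, counts = np.unique(y[idx[i]], return_counts=True)
        j = np.argmax(counts)
        # Relabel only when the neighbourhood majority is confident and disagrees.
        if counts[j] / k >= eta and vals[j] != y[i]:
            y_clean[i] = vals[j]
    return y_clean
```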

Defending against poisoning attacks in online learning settings

Greg Collinge, Emil C. Lupu, Luis Muñoz-González
2019 The European Symposium on Artificial Neural Networks  
In this paper we first formulate an optimal attack strategy against online learning classifiers to assess worst-case scenarios.  ...  We also propose two defence mechanisms to mitigate the effect of online poisoning attacks by analysing the impact of the data points in the classifier and by means of an adaptive combination of machine  ...  Adaptive Combination of Machine Learning Models: Following a different approach, we also propose to defend against online poisoning attacks with a convex combination of two classifiers with different learning  ... 
dblp:conf/esann/CollingeLM19 fatcat:nkaeknnixvg6dltwbodjyg4zyu
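
A rough sketch of the adaptive convex combination idea from this entry, assuming two SGD-based logistic learners with different learning rates and a simple loss-driven update of the mixing weight; the adaptation rule and constants are assumptions, not the paper's.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

fast = SGDClassifier(loss="log_loss", learning_rate="constant", eta0=0.5)
slow = SGDClassifier(loss="log_loss", learning_rate="constant", eta0=0.01)
alpha, beta = 0.5, 0.1    # mixing weight and its adaptation rate (assumed values)

def online_step(x, y, classes=(0, 1)):
    """One round: predict with the convex combination, then update both learners."""
    global alpha
    x = x.reshape(1, -1)
    if hasattr(fast, "classes_"):                  # both learners have seen data
        p1 = fast.predict_proba(x)[0, 1]
        p2 = slow.predict_proba(x)[0, 1]
        p = alpha * p1 + (1 - alpha) * p2          # combined prediction
        l1, l2 = (p1 - y) ** 2, (p2 - y) ** 2      # instantaneous squared losses
        alpha = float(np.clip(alpha + beta * (l2 - l1), 0.0, 1.0))
    else:
        p = 0.5
    fast.partial_fit(x, [y], classes=classes)
    slow.partial_fit(x, [y], classes=classes)
    return p
```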

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges [article]

Jinyuan Jia, Neil Zhenqiang Gong
2019 arXiv   pre-print
To defend against inference attacks, we can add carefully crafted noise into the public data to turn them into adversarial examples, such that attackers' classifiers make incorrect predictions for the  ...  In this chapter, we discuss the opportunities and challenges of defending against ML-equipped inference attacks via adversarial examples.  ...  Defending against Inference Attacks via Adversarial Examples  ... 
arXiv:1909.08526v2 fatcat:6vcswmo5hzbw3fgveamzjcubre
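
For intuition only, a hedged sketch of the mechanism described above: one FGSM-like perturbation of a public feature vector against a logistic inference classifier. The classifier parameters w, b (assumed known or approximated by a surrogate) and the step size eps are illustrative, not the authors' method.

```python
import numpy as np

def perturb_against_inference(x, w, b, wrong_label, eps=0.1):
    """One FGSM-like step so a logistic classifier p = sigmoid(w.x + b) predicts wrong_label."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - wrong_label) * w        # gradient of cross-entropy w.r.t. x for the wrong label
    return x - eps * np.sign(grad)      # move the public data toward the wrong prediction
```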

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers [article]

Xi Li, David J. Miller, Zhen Xiang, George Kesidis
2022 arXiv   pre-print
Data Poisoning (DP) is an effective attack that causes trained classifiers to misclassify their inputs.  ...  3) jointly identifies poisoned components and samples by minimizing the BIC cost defined over the whole training set, with the identified poisoned data removed prior to classifier training.  ...  Related Work: An obvious strategy for defending against data poisoning attacks is to conduct "data sanitization" on the training set, i.e., identifying and cleansing the attack samples as training set outliers  ... 
arXiv:2105.13530v2 fatcat:f4e4o3q4gzdq3bm2hg45c6ibeq
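
A much-simplified sketch of a mixture-model cleaning step in the spirit of this entry (not the paper's full joint BIC procedure): fit a Gaussian mixture to the samples of one class, select the component count by BIC, and flag points in very small components as candidate poison; max_k and min_weight are assumed values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def flag_suspicious(X_class, max_k=5, min_weight=0.05):
    """Return a boolean mask of samples assigned to rare mixture components."""
    best = min(
        (GaussianMixture(n_components=k, random_state=0).fit(X_class) for k in range(1, max_k + 1)),
        key=lambda g: g.bic(X_class),               # BIC-based model selection
    )
    comp = best.predict(X_class)                    # component assignment per sample
    rare = np.where(best.weights_ < min_weight)[0]  # components with tiny mixing weight
    return np.isin(comp, rare)
```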

Randomizing SVM Against Adversarial Attacks Under Uncertainty [chapter]

Yan Chen, Wei Wang, Xiangliang Zhang
2018 Lecture Notes in Computer Science  
In this paper, we propose randomized SVMs against generalized adversarial attacks under uncertainty, through learning a classifier distribution rather than a single classifier in traditional robust SVMs  ...  The experimental results demonstrate the effectiveness of our proposed models on defending against various attacks, including aggressive attacks with uncertainty.  ...  To protect the classification system, instead of learning a fixed classifier, the defender uses training data to infer a distribution of classifiers.  ... 
doi:10.1007/978-3-319-93040-4_44 fatcat:2dmkgtk4rndubjpjmbumhj5dtm
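
One loose way to realise "a distribution of classifiers" is a bagged surrogate: train several SVMs on bootstrap resamples and answer each query with a randomly drawn member. This is a simplification for illustration, not the chapter's robust-optimization formulation.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fit_randomized_svm(X, y, n_models=10, C=1.0, gamma="scale"):
    """Train an ensemble of SVMs on bootstrap resamples of the training data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))          # bootstrap resample
        models.append(SVC(C=C, gamma=gamma).fit(X[idx], y[idx]))
    return models

def randomized_predict(models, x):
    """Answer each query with a classifier sampled at random from the ensemble."""
    return models[rng.integers(len(models))].predict(x.reshape(1, -1))[0]
```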

D6.3 Security of Federated Machine Learning Algorithms

Luis Muñoz-González, Muhammad Zaid Hameed, Alexander Matyasko, Emil Lupu, Ambrish Rawat, Giulio Zizzo, Mark Purcell
2021 Zenodo  
This includes a report with a comprehensive evaluation of the robustness of the different algorithms developed in the MUSKETEER Machine Learning Library (MMLL) against different attacks both at training time (poisoning attacks) and test time (evasion attacks).  ...  PIMA is a binary classification task, so the label flipping attacker simply flips the labels of the training data points.  ... 
doi:10.5281/zenodo.5841977 fatcat:vv2nbzgm6bhfvm3dznad7vmaiq
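
The label-flipping attacker described in this entry can be sketched in a few lines for a binary task: flip the labels of a randomly chosen fraction of the training points (the fraction below is an arbitrary example).

```python
import numpy as np

def flip_labels(y, fraction=0.1, seed=0):
    """Flip the {0,1} labels of a random fraction of the training points."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned
```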

Boundary augment: A data augment method to defend poison attack

Xuan Chen, YueNa Ma, ShiWei Lu, Yu Yao
2021 IET Image Processing  
In this paper, a novel method to defend against poison attacks by estimating the distribution of poison data and retraining the backdoor model with a small amount of training data is introduced.  ...  It is also shown that adversarial training is a practical approach to defend against poison attacks.  ...  They encouraged the use of L2 regularization to defend against poison attacks.  ... 
doi:10.1049/ipr2.12325 fatcat:3q65g5ddxnbg3der4zrkhg6k7e

Attack Strength vs. Detectability Dilemma in Adversarial Machine Learning [article]

Christopher Frederickson, Michael Moore, Glenn Dawson, Robi Polikar
2018 arXiv   pre-print
A number of approaches have been developed that can render a machine learning algorithm ineffective through poisoning or other types of attacks.  ...  We then propose an equally simple yet elegant solution by adding a regularization term to the attacker's objective function that penalizes outlying attack points.  ...  With each random initialization, 50 instances are sampled from the Gaussian distribution and the attack point pictured is appended to poison the dataset.  ... 
arXiv:1802.07295v1 fatcat:bbpmylp3z5e3jdcaq67zsdnemi
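
A hedged sketch of the detectability-regularized attacker objective described above: score a candidate poison point by the validation error it induces minus a penalty on how far it lies from the class it must blend into. The linear-SVM surrogate, the centroid-distance penalty, and lam are choices made here for illustration.

```python
import numpy as np
from sklearn.svm import SVC

def attacker_objective(x_p, y_p, X_tr, y_tr, X_val, y_val, lam=0.5):
    """Damage caused by the poison point minus a penalty on its outlyingness."""
    clf = SVC(kernel="linear").fit(np.vstack([X_tr, x_p]), np.append(y_tr, y_p))
    damage = 1.0 - clf.score(X_val, y_val)           # induced validation error
    centroid = X_tr[y_tr == y_p].mean(axis=0)
    penalty = np.linalg.norm(x_p - centroid)         # how far the point sits from its class
    return damage - lam * penalty
```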

Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges [article]

Bowei Xi
2021 arXiv   pre-print
against machine learning techniques -- poisoning attacks, evasion attacks, and privacy attacks.  ...  While adversarial samples in cybersecurity often have different properties/distributions compared with training data, adversarial images in computer vision are created with minor input perturbations.  ...  Here we discuss all the existing approaches proposed to defend against poisoning attacks.  ... 
arXiv:2107.02894v1 fatcat:ir7vzxh3wfaddcmgezqtyxu7iy

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning [article]

Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
2021 arXiv   pre-print
We generalize two defenses for data poisoning attacks to defend against our local model poisoning attacks.  ...  Our evaluation results show that one defense can effectively defend against our attacks in some cases, but the defenses are not effective enough in other cases, highlighting the need for new defenses against  ...  against data poisoning attacks, to defend against our local model poisoning attacks.  ... 
arXiv:1911.11815v4 fatcat:z6myiywcprccznhlhlgkmvhiji
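
As a very rough illustration of the kind of defense generalized in this work, the sketch below performs loss-based rejection at the server: client updates whose inclusion yields the highest loss on a small clean validation set are discarded before averaging. The val_loss_fn callback, the update representation, and n_reject are assumptions of this sketch.

```python
import numpy as np

def aggregate_with_rejection(global_w, client_updates, val_loss_fn, n_reject=2):
    """Average client weight updates after discarding the n_reject worst by validation loss."""
    # val_loss_fn(w) -> scalar loss of model weights w on the server's clean validation set
    losses = [val_loss_fn(global_w + dw) for dw in client_updates]
    keep = np.argsort(losses)[: len(client_updates) - n_reject]   # drop the worst updates
    return global_w + np.mean([client_updates[i] for i in keep], axis=0)
```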

Exploring Adversarial Attacks and Defences for Fake Twitter Account Detection

Panagiotis Kantartopoulos, Nikolaos Pitropakis, Alexios Mylonas, Nicolas Kylilis
2020 Technologies  
vulnerable to adversarial attacks.  ...  Moreover, we propose and evaluate the use of k-NN as a countermeasure to remedy the effects of the adversarial attacks that we have implemented.  ... 
doi:10.3390/technologies8040064 fatcat:sn3c4k3e2jemnfjdzxrf53aoy4

Data Poisoning Won't Save You From Facial Recognition [article]

Evani Radiya-Dixit, Sanghyun Hong, Nicholas Carlini, Florian Tramèr
2022 arXiv   pre-print
Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures.  ...  We caution that facial recognition poisoning will not admit an "arms race" between attackers and defenders.  ...  In each round i ≥ 1, the attacker sends new poisoned data to the defender. The defender may train on all the training data (X_adv, Y) it collected over prior rounds.  ... 
arXiv:2106.14851v2 fatcat:bgor6b6tnnewvhpzbwjuu5hhba

Neural Trojans [article]

Yuntao Liu, Yang Xie, Ankur Srivastava
2017 arXiv   pre-print
We show that this is an effective attack and provide three mitigation techniques: input anomaly detection, re-training, and input preprocessing. All the techniques are proven effective.  ...  As we do not know about the training process, there can be security threats in the neural IP: the IP vendor (attacker) may embed hidden malicious functionality, i.e. neural Trojans, into the neural IP.  ...  The attacks against machine learning algorithms can be broadly classified by when the attack takes place: during training (poisoning attack) or after deployment (exploratory attack). B.  ... 
arXiv:1710.00942v1 fatcat:5h7rnyd7vvb6daisn3e4jhhrdm
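
Of the three mitigations listed above, input anomaly detection is the easiest to sketch; one possible (assumed) instantiation flags queries that look unlike the defender's legitimate input distribution before they reach the untrusted neural IP.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

detector = IsolationForest(contamination=0.01, random_state=0)

def fit_detector(X_legit):
    """Fit the anomaly detector on inputs the defender trusts."""
    detector.fit(X_legit.reshape(len(X_legit), -1))

def is_suspicious(x):
    """Flag a query before it is passed to the untrusted neural IP."""
    return detector.predict(x.reshape(1, -1))[0] == -1    # -1 means anomaly
```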

Subpopulation Data Poisoning Attacks [article]

Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
2021 arXiv   pre-print
Compared to existing backdoor poisoning attacks, subpopulation attacks have the advantage of inducing misclassification in naturally distributed data points at inference time, making the attacks extremely  ...  Poisoning attacks against machine learning induce adversarial modification of data used by a machine learning algorithm to selectively change its output when it is deployed.  ... 
arXiv:2006.14026v3 fatcat:4ispwapnizcj5fkpxio4e4knjq
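
A hedged sketch in the spirit of this entry (binary labels and a clustering-based subpopulation choice are assumptions): pick one cluster of the training data as the target subpopulation and append poisoned copies of its points with flipped labels.

```python
import numpy as np
from sklearn.cluster import KMeans

def subpopulation_poison(X, y, n_clusters=10, target_cluster=0, seed=0):
    """Append label-flipped copies of the points falling in one cluster (the subpopulation)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    mask = km.labels_ == target_cluster
    X_p, y_p = X[mask].copy(), 1 - y[mask]          # poisoned copies with flipped {0,1} labels
    return np.vstack([X, X_p]), np.concatenate([y, y_p])
```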
Showing results 1–15 of 8,701