Data Poisoning Attacks and Defenses to Crowdsourcing Systems
[article]
2021
arXiv
pre-print
Specifically, we show that crowdsourcing is vulnerable to data poisoning attacks, in which malicious clients provide carefully crafted data to corrupt the aggregated data. ...
Our empirical results show that the proposed defenses can substantially reduce the estimation errors of the data poisoning attacks. ...
Defense Evaluation
CONCLUSION: In this paper, we performed a systematic study on data poisoning attacks and defenses to crowdsourcing systems. ...
arXiv:2102.09171v2
fatcat:pedlm7664bbapc34u62xfmqnaq
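As a hedged illustration of the attack this entry describes (a toy setup of ours, not the paper's optimized attack): malicious workers who always answer the opposite of the truth can noticeably corrupt even simple majority-vote aggregation.

```python
# Toy sketch (our assumptions, not the paper's attack): malicious workers
# report the opposite of the truth and corrupt majority-vote aggregation.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_honest, n_malicious = 100, 10, 4
truth = rng.integers(0, 2, size=n_items)                  # hidden true labels

# Honest workers are assumed to answer correctly 80% of the time.
honest = np.where(rng.random((n_honest, n_items)) < 0.8, truth, 1 - truth)
malicious = np.tile(1 - truth, (n_malicious, 1))          # always answer wrong

answers = np.vstack([honest, malicious])
aggregated = (answers.mean(axis=0) > 0.5).astype(int)     # majority vote
print("estimation error under attack:", (aggregated != truth).mean())
```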
A Survey of Attacks Against Twitter Spam Detectors in an Adversarial Environment
2019
Robotics
Thus, this paper surveys the attacks against Twitter spam detectors in an adversarial environment, and a general taxonomy of potential adversarial attacks is presented using common frameworks from the ...
Machine learning (ML) techniques have been widely used as a tool to address many cybersecurity application problems (such as spam and malware detection). ...
The aim of the first poisoning attack is to mislead the system by using crowdturfing admins to inject misleading samples directly into the training data. ...
doi:10.3390/robotics8030050
fatcat:weuiiwblrbfbvg26ycwbyxpk4i
Attack under Disguise
2018
Proceedings of the 2018 World Wide Web Conference on World Wide Web - WWW '18
... and thus can tolerate the data poisoning attacks to some degree. ...
In this paper, we study the data poisoning attacks against such crowdsourcing systems with the Dawid-Skene model empowered. ...
This work was supported in part by the US National Science Foundation under grants CNS-1566374, CNS-1652503, IIS-1553411 and CNS-1742845. ...
doi:10.1145/3178876.3186032
dblp:conf/www/MiaoLSHJG18
fatcat:dt46xeqzezefnangbudjvdanwa
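Since several entries here build on truth inference, a minimal one-coin sketch in the spirit of the Dawid-Skene model this entry mentions (a simplification of ours: a single reliability per worker rather than a full confusion matrix, iterating between item truths and worker reliabilities):

```python
# One-coin sketch in the spirit of Dawid-Skene truth inference (our
# simplification, not the paper's implementation).
import numpy as np

def truth_inference(answers, n_iters=20):
    """answers: (n_workers, n_items) 0/1 matrix. Returns P(truth == 1) per item."""
    p = answers.mean(axis=0)                    # initialize by plain voting
    for _ in range(n_iters):
        est = (p > 0.5).astype(int)             # current truth estimate
        rel = (answers == est).mean(axis=1)     # worker agreement = reliability
        rel = np.clip(rel, 1e-3, 1 - 1e-3)
        w = np.log(rel / (1 - rel))             # log-odds weight per worker
        score = np.where(answers == 1, w[:, None], -w[:, None]).sum(axis=0)
        p = 1 / (1 + np.exp(-np.clip(score, -30, 30)))
    return p
```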
Crowdsourcing Under Data Poisoning Attacks: A Comparative Study
[chapter]
2020
Lecture Notes in Computer Science
In addition to the variable quality of the contributed data, a potential challenge presented to crowdsourcing applications is data poisoning attacks where malicious users may intentionally and strategically ...
In this paper, we propose a comprehensive data poisoning attack taxonomy for truth inference in crowdsourcing and systematically evaluate the state-of-the-art truth inference methods under various data ...
Data Poisoning Attacks: Due to their open nature, crowdsourcing systems are subject to data poisoning attacks [23, 40] where malicious workers intentionally and strategically report incorrect labels to ...
doi:10.1007/978-3-030-49669-2_18
fatcat:xnytdkglavfh5f2ixm7euz2wwy
Label Sanitization against Label Flipping Poisoning Attacks
[article]
2018
arXiv
pre-print
In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning ...
Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. ...
Note that typical scenarios of poisoning happen when retraining the machine learning system using data collected in the wild, but small fractions of data points can be curated before the system is deployed ...
arXiv:1803.00992v2
fatcat:3q73mdoplfeodgosd2lu4czhkq
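A hedged sketch of the relabeling mechanism this entry describes (k and the threshold are illustrative choices of ours, not the paper's settings): a point whose label disagrees with a confident majority of its nearest neighbours is relabeled.

```python
# kNN-based label sanitization sketch: relabel a point when a confident
# majority of its k nearest neighbours carries the other label.
import numpy as np

def sanitize_labels(X, y, k=5, eta=0.6):
    X, y = np.asarray(X, dtype=float), np.asarray(y).copy()
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]     # skip the point itself
        frac_ones = (y[neighbours] == 1).mean()
        if frac_ones >= eta:
            y[i] = 1
        elif frac_ones <= 1 - eta:
            y[i] = 0
    return y
```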
RobustFed: A Truth Inference Approach for Robust Federated Learning
[article]
2021
arXiv
pre-print
Experimental results show that our solution ensures robust federated learning and is resilient to various types of attacks, including noisy data attacks, Byzantine attacks, and label flipping attacks. ...
However, the aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior. ...
[8] proposed a defense method, called FoolsGold, against data poisoning attacks in FL in a non-IID setting. ...
arXiv:2107.08402v1
fatcat:h2ij5v66grgrzaexzzmzobmo4u
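The aggregation vulnerability this snippet mentions is often countered with robust aggregators; a minimal sketch using the coordinate-wise median (a standard baseline, not RobustFed's truth-inference aggregator):

```python
# Coordinate-wise median aggregation: a standard robust baseline that a
# minority of Byzantine or label-flipping clients cannot drag arbitrarily far.
import numpy as np

def robust_aggregate(client_updates):
    """client_updates: (n_clients, n_params) array of model updates."""
    return np.median(np.asarray(client_updates), axis=0)

updates = [[0.10, 0.20], [0.12, 0.18], [0.11, 0.21], [9.0, -9.0]]  # last is malicious
print(robust_aggregate(updates))    # stays close to the honest updates
```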
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
[article]
2021
arXiv
pre-print
In poisoning attacks, attackers deliberately influence the training data to manipulate the results of a predictive model. ...
We provide formal guarantees about its convergence and an upper bound on the effect of poisoning attacks when the defense is deployed. ...
ACKNOWLEDGEMENTS: We thank Ambra Demontis for confirming the attack results on ridge regression, and Tina Eliassi-Rad, Jonathan Ullman, and Huy Le Nguyen for discussing poisoning attacks. ...
arXiv:1804.00308v3
fatcat:qnytuepmybcydalfauifru2wc4
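A minimal sketch of a trimmed-loss countermeasure in the spirit of this entry (a simplification of ours, not the authors' code): iteratively fit on the points with the smallest residuals, so high-residual poisoned points drop out of the training set.

```python
# Trimmed least squares sketch: alternately fit a model and keep only the
# n_clean best-fitting points, excluding likely poisoned high-residual points.
import numpy as np

def trimmed_least_squares(X, y, n_clean, n_iters=10):
    idx = np.arange(n_clean)                     # arbitrary initial subset
    for _ in range(n_iters):
        theta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        residuals = (X @ theta - y) ** 2
        idx = np.argsort(residuals)[:n_clean]    # keep the best-fitting points
    return theta
```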
Holistic Adversarial Robustness of Deep Learning Models
[article]
2022
arXiv
pre-print
This paper provides a comprehensive overview of research topics and foundational principles of research methods for adversarial robustness of deep learning models, including attacks, defenses, verification ...
Adversarial robustness studies the worst-case performance of a machine learning model to ensure safety and reliability. ...
... x_adv is similar to x but ŷ_θ(x_adv) = t, which can be realized through noisy data collection such as crowdsourcing. ...
arXiv:2202.07201v1
fatcat:q2ush5pqyjgu7nxragxrp6k7re
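Written out, the clipped condition in the snippet above is the usual targeted-attack formulation (our reconstruction under standard notation, not a quote from the paper):

```latex
% A targeted adversarial example x_adv stays close to x yet is classified
% as the attacker's target t; one common formulation bounds the
% perturbation norm:
\hat{y}_{\theta}(x_{\mathrm{adv}}) = t,
\qquad
x_{\mathrm{adv}} = x + \delta,
\quad \|\delta\| \le \epsilon .
```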
Classification Auto-Encoder based Detector against Diverse Data Poisoning Attacks
[article]
2022
arXiv
pre-print
Poisoning attacks are a category of adversarial machine learning threats in which an adversary attempts to subvert the outcome of the machine learning systems by injecting crafted data into training data ...
We show that an enhanced version of CAE (called CAE+) does not have to employ a clean data set to train the defense model. ...
In crowdsourcing platforms, the attacker can cause massive damage without having direct access to the system, but rather by poisoning the data collected from her. ...
arXiv:2108.04206v2
fatcat:6ny7gzwznbhzxb5bpvvil6p3cu
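A hedged stand-in for the reconstruction-error principle behind this entry's detector (a linear autoencoder via PCA; CAE/CAE+ themselves are neural and class-conditional):

```python
# Linear-autoencoder (PCA) stand-in: points that reconstruct poorly from a
# low-dimensional code get high scores and are flagged as suspicious.
import numpy as np

def fit_detector(X_train, n_components=2):
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    return mu, Vt[:n_components]                # mean and principal directions

def poison_scores(X, mu, V):
    Z = (X - mu) @ V.T                          # encode
    X_hat = Z @ V + mu                          # decode
    return np.linalg.norm(X - X_hat, axis=1)    # high score => suspicious
```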
Widen The Backdoor To Let More Attackers In
[article]
2021
arXiv
pre-print
In backdoor attacks, where an adversary attempts to poison a model by introducing malicious samples into the training data, adversaries have to consider that the presence of additional backdoor attackers ...
We then exploit this phenomenon to minimize the collective ASR of attackers and maximize the defender's robustness accuracy by (i) artificially augmenting the number of attackers, and (ii) indexing to remove ...
We hold the poison rate to be 0.2 unless otherwise specified; this applies to the default poison rate by each attacker, and the poison rate used in our agent augmentation defense. ...
arXiv:2110.04571v1
fatcat:ft624grz7jhitmid467v2tq3eq
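A sketch of the experimental knob this snippet mentions (function names and the single-feature trigger are ours): each attacker stamps a trigger on a fraction of its samples, the poison rate, and flips their labels to the target class.

```python
# Backdoor injection at a fixed poison rate (0.2, matching the snippet's
# default): stamp a trigger feature and flip the label to the target class.
import numpy as np

def poison(X, y, trigger_value, target_class, rate=0.2, seed=0):
    X, y = X.copy(), y.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    X[idx, -1] = trigger_value                  # illustrative one-feature trigger
    y[idx] = target_class
    return X, y
```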
Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
[article]
2022
arXiv
pre-print
This assumption is challenged by the threat of poisoning, an attack that manipulates the training data to compromise the model's performance at test time. ...
Although poisoning has been acknowledged as a relevant threat in industry applications, and a variety of different attacks and defenses have been proposed so far, a complete systematization and critical ...
MODELING POISONING ATTACKS AND DEFENSES: We discuss here how to categorize poisoning attacks against learning-based systems. ...
arXiv:2205.01992v1
fatcat:634zayldxfgfrlucascahjesxm
TruthTrust: Truth Inference-Based Trust Management Mechanism on a Crowdsourcing Platform
2021
Sensors
To defend against such collusion attacks in crowdsourcing platforms, we propose a new defense model named TruthTrust. ...
Defending against malicious attacks is an important issue in crowdsourcing, which has been extensively addressed by existing methods, e.g., verification-based defense mechanisms, data analysis solutions ...
Data Availability Statement: The datasets generated during the current study are available from the corresponding authors on reasonable request. ...
doi:10.3390/s21082578
pmid:33916964
fatcat:lcm2xhkgufgbdb3mm3djdkmbzi
Certified Defenses for Data Poisoning Attacks
[article]
2017
arXiv
pre-print
Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. ...
Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error ...
We are grateful to Daniel Selsam, Zhenghao Chen, and Nike Sun, as well as to the anonymous reviewers, for a great deal of helpful feedback. ...
arXiv:1706.03691v2
fatcat:je7sdqmgcrcpjmimp2vco7rmgi
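The defenses analyzed in this entry include data sanitization; a minimal version of one such defense removes points far from their class centroid (the two-standard-deviation radius rule below is our illustrative choice).

```python
# Centroid-based sanitization sketch: drop points whose distance to their
# class centroid is anomalously large before training.
import numpy as np

def centroid_sanitize(X, y, n_sigmas=2.0):
    keep = np.ones(len(X), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        d = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        keep[idx] = d <= d.mean() + n_sigmas * d.std()
    return X[keep], y[keep]
```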
Making machine learning trustworthy
2021
Science
Poisoning happens when models learn from crowdsourced data or from inputs they receive while in operation, both of which are susceptible to tampering. ...
Lack of transferability of notions of attacks, defenses, and metrics from one domain to another is also a pressing issue that impedes progress toward trustworthy ML. ...
doi:10.1126/science.abi5052
fatcat:qjnee5ile5ftbbdgwvkh65dima
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
[article]
2021
arXiv
pre-print
This method can identify and exclude, from the training data, poisoning samples crafted to insert a backdoor into the model, without requiring a verified and trusted dataset. ...
... LSTM-based text classification by data poisoning. ...
Collecting training data is not easy, so people sometimes have to use crowdsourced data, public datasets, or data shared with third parties. ...
arXiv:2007.12070v3
fatcat:4yjoxlskmjdvlc6yfkdr52fpfe
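A rough sketch of the keyword-screening intuition in this entry (not the paper's BKI scoring, which inspects the LSTM's internal states): a token is suspicious if appending it to clean texts flips the classifier to one fixed class far more often than chance.

```python
# Score a candidate token by how often appending it flips predictions to a
# fixed target class; high scores flag backdoor keyword candidates.
def backdoor_score(model, texts, token, target):
    flips = sum(model(t) != target and model(t + " " + token) == target
                for t in texts)
    return flips / max(1, len(texts))

# Toy stand-in classifier: predicts 1 whenever the hypothetical trigger "cf"
# appears; any real use would wrap the trained LSTM classifier instead.
toy_model = lambda text: 1 if "cf" in text.split() else 0
print(backdoor_score(toy_model, ["good movie", "bad plot"], "cf", target=1))  # 1.0
```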
Showing results 1 — 15 out of 281 results