Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
[article] · 2021 · arXiv pre-print
As machine learning becomes widely used for automated decisions, attackers have strong incentives to manipulate the results and models generated by machine learning algorithms. In this paper, we perform the first systematic study of poisoning attacks and their countermeasures for linear regression models. In poisoning attacks, attackers deliberately influence the training data to manipulate the results of a predictive model. We propose a theoretically-grounded optimization framework
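To make the idea of a poisoning attack concrete, the sketch below (a hypothetical illustration, not the paper's optimization framework) fits an ordinary least-squares line on clean data, then injects a handful of attacker-chosen points and refits, showing how a small fraction of poisoned training data can shift the learned slope:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y ≈ 2x + small noise
x = rng.uniform(0, 1, 50)
y = 2 * x + rng.normal(0, 0.05, 50)

def fit_slope(x, y):
    # Ordinary least-squares fit of y = a*x + b; return the slope a
    a, b = np.polyfit(x, y, 1)
    return a

clean_slope = fit_slope(x, y)

# Attacker injects a few high-leverage points chosen to drag the slope down
x_poison = np.array([1.0, 1.0, 1.0])
y_poison = np.array([-5.0, -5.0, -5.0])
poisoned_slope = fit_slope(np.concatenate([x, x_poison]),
                           np.concatenate([y, y_poison]))

# With only 3 poison points among 53, the fitted slope moves far from 2
print(clean_slope, poisoned_slope)
```

This naive attack simply places points far from the clean distribution; the paper's contribution is an optimization framework that chooses poison points to maximize damage while evading such obvious outlier behavior.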
arXiv:1804.00308v3