A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2019; you can also visit the original URL.
The file type is application/pdf.
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
[article] · 2019 · arXiv pre-print
Recent results suggest that attacks against supervised machine learning systems are quite effective, while defenses are easily bypassed by new attacks. However, the specifications for machine learning systems currently lack precise adversary definitions, and the existing attacks make diverse, potentially unrealistic assumptions about the strength of the adversary who launches them. We propose the FAIL attacker model, which describes the adversary's knowledge and control along four dimensions.
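The abstract names four dimensions without enumerating them; in the underlying paper they are the adversary's knowledge of the Features, the learning Algorithm, and the training Instances, plus the Leverage to modify the relevant features. As a rough, non-authoritative illustration (the class and field names below are hypothetical, not from the paper), one could encode an attack's assumptions under this model like so:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FAILModel:
    """Hypothetical encoding of the four FAIL dimensions: each field
    marks one axis of adversary knowledge or control."""
    features: bool    # F: does the adversary know the feature space?
    algorithm: bool   # A: does the adversary know the learning algorithm?
    instances: bool   # I: does the adversary know the training instances?
    leverage: bool    # L: can the adversary modify the relevant features?

# Example: a black-box evasion attacker who can perturb its own inputs
# but knows neither the model internals nor the training data.
black_box_evasion = FAILModel(features=True, algorithm=False,
                              instances=False, leverage=True)
print(black_box_evasion)
```

Expressing each published attack as such a tuple is what lets assumptions about adversary strength be compared across attacks rather than left implicit.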
arXiv:1803.06975v2
fatcat:gmoywj2vffcgfgdz6e2n6cpuei