A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017; you can also visit the original URL.
The file type is application/pdf.
Adding Robustness to Support Vector Machines Against Adversarial Reverse Engineering
2014
Proceedings of the 23rd ACM International Conference on Information and Knowledge Management - CIKM '14
Many classification algorithms have been successfully deployed in security-sensitive applications, including spam filters and intrusion detection systems. In such adversarial environments, adversaries can mount exploratory attacks against the defender, such as evasion and reverse engineering. In this paper, we discuss why reverse engineering attacks can be carried out quite efficiently against fixed classifiers, and investigate the use of randomization as a suitable strategy for mitigating their risk.
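The abstract does not spell out the paper's formulation, but the general idea of randomization as a defense can be illustrated with a small sketch. The following Python example (an assumption for illustration, not the authors' method; the helper name randomized_predict and all parameters are hypothetical) trains several linear SVMs on bootstrap resamples and answers each query with a randomly chosen member, so repeated probing exposes a mixture of decision boundaries rather than a single fixed hyperplane.

# Illustrative sketch only: randomize which SVM answers each query to make
# reverse engineering of a single decision boundary harder.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Train an ensemble of linear SVMs on bootstrap resamples to obtain
# slightly different decision boundaries.
ensemble = []
for _ in range(10):
    idx = rng.choice(len(X), size=len(X), replace=True)
    ensemble.append(LinearSVC(C=1.0, dual=False).fit(X[idx], y[idx]))

def randomized_predict(x):
    # Each query is answered by a classifier drawn at random from the ensemble,
    # so an adversary probing the deployed system observes inconsistent
    # responses near the boundary instead of one fixed hyperplane.
    clf = ensemble[rng.integers(len(ensemble))]
    return clf.predict(x.reshape(1, -1))[0]

print([randomized_predict(X[0]) for _ in range(5)])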
doi:10.1145/2661829.2662047
dblp:conf/cikm/AlabdulmohsinGZ14
fatcat:qpqg6oddvrdh5hxdrvjl66c67m