A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL. The file type is application/pdf.
n-ML: Mitigating Adversarial Examples via Ensembles of Topologically Manipulated Classifiers
[article] 2019, arXiv pre-print
This paper proposes a new defense called n-ML against adversarial examples, i.e., inputs crafted by perturbing benign inputs by small amounts to induce misclassifications by classifiers. Inspired by n-version programming, n-ML trains an ensemble of n classifiers, and inputs are classified by a vote of the classifiers in the ensemble. Unlike prior such approaches, however, the classifiers in the ensemble are trained specifically to classify adversarial examples differently, rendering it very […]
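The voting scheme the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: the classifiers, the `min_agreement` rejection threshold, and the toy labels below are all illustrative assumptions; the sketch only shows how a majority vote over an ensemble can classify benign inputs while abstaining when the ensemble members disagree, as they are trained to do on adversarial examples.

```python
from collections import Counter

def ensemble_classify(x, classifiers, min_agreement=None):
    """Classify x by a vote of an ensemble (illustrative sketch).

    classifiers: callables mapping an input to a label.
    min_agreement: if set, abstain (return None) when fewer than this
    many classifiers agree on the winning label -- a simple stand-in
    for flagging adversarial inputs, on which the ensemble members
    are trained to disagree.
    """
    votes = Counter(clf(x) for clf in classifiers)
    label, count = votes.most_common(1)[0]
    if min_agreement is not None and count < min_agreement:
        return None  # no sufficient consensus: treat as adversarial
    return label

# Toy ensemble: two members agree, one dissents (hypothetical labels).
clfs = [lambda x: "cat", lambda x: "cat", lambda x: "dog"]
```

With this toy ensemble, a plain majority vote yields `"cat"`, while requiring unanimous agreement (`min_agreement=3`) makes the ensemble abstain.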
arXiv:1912.09059v1
fatcat:yekadehvobajbideoaq27ugh7u