A copy of this work (application/pdf) was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2020.
Evaluating Defensive Distillation For Defending Text Processing Neural Networks Against Adversarial Examples
[article] · 2019 · arXiv pre-print
Adversarial examples are artificially modified input samples that cause misclassifications while remaining imperceptible to humans. They pose a challenge for many tasks, such as image and text classification, especially since research shows that many adversarial examples transfer between different classifiers. In this work, we evaluate the performance of a popular defensive strategy against adversarial examples called defensive distillation, which can be successful in
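For context on the defense the abstract names: defensive distillation trains a second ("distilled") network on the probability vectors a first network produces at an elevated softmax temperature T, which smooths the model's decision surface. The sketch below is not from the paper; it is a minimal NumPy illustration of the temperature-scaled softmax that the technique relies on, with illustrative logits chosen here as an assumption.

```python
import numpy as np

def softmax_with_temperature(logits, T):
    """Temperature-scaled softmax: higher T yields softer (flatter) probabilities."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical class logits for one input sample
logits = np.array([4.0, 1.0, 0.5])

hard = softmax_with_temperature(logits, T=1.0)   # standard softmax
soft = softmax_with_temperature(logits, T=20.0)  # softened labels used to train the distilled model
```

In defensive distillation, the `soft` vectors (not one-hot labels) supervise the second network at the same temperature; at test time the temperature is set back to 1, which is what makes the distilled model's gradients less useful to an attacker.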
arXiv:1908.07899v1