A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
[article]
2018 · arXiv pre-print
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications. However, deep neural networks with the softmax classifier are known to produce highly overconfident posterior distributions even for such abnormal samples. In this paper, we propose a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier.
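The excerpt above only states that a detection method is proposed; the full preprint (arXiv:1807.03888) is known for scoring inputs by their Mahalanobis distance to class-conditional Gaussians fitted on features of a pre-trained classifier. The sketch below illustrates that idea and is an assumption drawn from the full paper, not from the abstract excerpt; the feature extractor, class count, and threshold choice are placeholders.

```python
# Minimal sketch (assumption based on the full preprint, not the excerpt above):
# fit class-conditional Gaussians with a tied covariance over features from a
# pre-trained classifier, then score a test point by its smallest squared
# Mahalanobis distance to the class means. Feature extraction is abstracted away.
import numpy as np

def fit_gaussians(features, labels, num_classes):
    """Estimate per-class means and a shared precision matrix (tied covariance)."""
    d = features.shape[1]
    means = np.zeros((num_classes, d))
    cov = np.zeros((d, d))
    for c in range(num_classes):
        fc = features[labels == c]
        means[c] = fc.mean(axis=0)
        centered = fc - means[c]
        cov += centered.T @ centered
    cov /= len(features)
    precision = np.linalg.pinv(cov)  # pseudo-inverse for numerical safety
    return means, precision

def mahalanobis_confidence(x_feat, means, precision):
    """Confidence score: negative of the minimum squared Mahalanobis distance."""
    diffs = means - x_feat  # shape (num_classes, d)
    dists = np.einsum('cd,de,ce->c', diffs, precision, diffs)
    return -dists.min()

# Usage on synthetic features: lower scores flag out-of-distribution or
# adversarial inputs once a threshold is chosen on held-out data.
rng = np.random.default_rng(0)
train_feat = rng.normal(size=(1000, 16))
train_lab = rng.integers(0, 10, size=1000)
means, prec = fit_gaussians(train_feat, train_lab, num_classes=10)
score = mahalanobis_confidence(rng.normal(size=16), means, prec)
```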
arXiv:1807.03888v2
fatcat:kkgl5zrfdfhztk6hajgpvmhr5q