Improved Robustness to Open Set Inputs via Tempered Mixup [article]

Ryne Roady, Tyler L. Hayes, Christopher Kanan
2020, arXiv pre-print
Supervised classification methods often assume that evaluation data are drawn from the same distribution as the training data and that all classes are present during training. However, real-world classifiers must handle inputs that are far from the training distribution, including samples from unknown classes. Open set robustness refers to the ability to properly label samples from previously unseen categories as novel and to avoid high-confidence, incorrect predictions. Existing approaches have focused on novel inference methods, unique training architectures, or supplementing the training data with additional background samples. Here, we propose a simple regularization technique, easily applied to existing convolutional neural network architectures, that improves open set robustness without a background dataset. Our method achieves state-of-the-art results on open set classification baselines and scales easily to large-scale open set classification problems.
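The regularization referenced in the title builds on mixup, which trains on convex combinations of input pairs and their labels. The abstract does not spell out the tempered variant, so the following is a minimal sketch of standard mixup only, assuming one-hot labels and a hypothetical Beta-concentration default of `alpha=0.3`:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.3, rng=None):
    """Standard mixup: mix random pairs of inputs and their one-hot labels.

    x: (batch, ...) float array of inputs
    y_onehot: (batch, num_classes) one-hot label matrix
    alpha: Beta-distribution concentration (hypothetical default; the
           paper's tempered schedule is not described in this abstract)
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    perm = rng.permutation(len(x))      # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

The mixed labels are soft distributions rather than hard classes, which is what discourages the over-confident predictions on off-manifold inputs that the abstract describes.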
arXiv:2009.04659v1