Provable tradeoffs in adversarially robust classification

Edgar Dobriban, Hamed Hassani, David Hong, Alexander Robey
2022, arXiv preprint
It is well known that machine learning methods can be vulnerable to adversarially-chosen perturbations of their inputs. Despite significant progress in the area, foundational open problems remain. In this paper, we address several key questions. We derive exact and approximate Bayes-optimal robust classifiers for the important setting of two- and three-class Gaussian classification problems with arbitrary imbalance, for ℓ_2 and ℓ_∞ adversaries. In contrast to classical Bayes-optimal classifiers, the optimal decisions here cannot be determined pointwise, and new theoretical approaches are needed. We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry, which, to our knowledge, have not yet been used in the area. Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced. We also show further results, including an analysis of classification calibration for convex losses in certain models, and finite-sample rates for the robust risk.
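To make the setting concrete, the following sketch illustrates the simplest balanced, isotropic special case of the Gaussian model with an ℓ_2 adversary (the paper's imbalanced and three-class results are substantially more involved). For x ~ N(y·μ, σ²I) with y ∈ {-1, +1}, the linear rule sign(μᵀx) is the natural robust classifier, and a standard calculation gives its robust risk against perturbations of norm ε as Φ((ε − ‖μ‖)/σ). All numerical choices below (μ, σ, ε, sample size) are illustrative, not from the paper:

```python
import math
import numpy as np

def Phi(t):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Balanced two-class Gaussian model: x ~ N(y*mu, sigma^2 I), y in {-1, +1}.
rng = np.random.default_rng(0)
d, sigma, eps, n = 10, 1.0, 0.5, 200_000
mu = 2.0 * np.ones(d) / np.sqrt(d)       # ||mu|| = 2 (arbitrary choice)
w = mu / np.linalg.norm(mu)              # unit-norm linear direction

y = rng.choice([-1, 1], size=n)
x = y[:, None] * mu + sigma * rng.standard_normal((n, d))

# For a unit-norm linear classifier, the worst-case l2 perturbation of size
# eps reduces the margin y*(w.x) by exactly eps, so the robust error is the
# probability that the margin falls below eps.
margin = y * (x @ w)
robust_err_emp = np.mean(margin < eps)
robust_err_theory = Phi((eps - np.linalg.norm(mu)) / sigma)

print(f"empirical robust error:   {robust_err_emp:.4f}")
print(f"theoretical robust error: {robust_err_theory:.4f}")
```

In this symmetric case the margin is N(‖μ‖, σ²), so the Monte Carlo estimate should match the closed form closely; the tradeoff the abstract describes appears when the class prior or the class means break this symmetry.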
arXiv:2006.05161v5