A General Framework for Abstention Under Label Shift [article]

Amr M. Alexandari, Anshul Kundaje, Avanti Shrikumar
2022 arXiv preprint
In safety-critical applications of machine learning, it is often important to abstain from making predictions on low-confidence examples. Standard abstention methods tend to focus on optimizing top-k accuracy, but in many applications accuracy is not the metric of interest. Further, label shift (a shift in class proportions between training time and prediction time) is ubiquitous in practical settings, and existing abstention methods do not handle label shift well. In this work, we propose a general framework for abstention that can be applied to optimize any metric of interest, that is adaptable to label shift at test time, and that works out-of-the-box with any classifier that can be calibrated. Our approach leverages recent reports that calibrated probability estimates can be used as a proxy for the true class labels, allowing us to estimate the change in an arbitrary metric if an example were abstained on. We present computationally efficient algorithms under our framework to optimize sensitivity at a target specificity, auROC, and the weighted Cohen's Kappa, and introduce a novel strong baseline based on the JS divergence from the prior class probabilities. Experiments on synthetic, biological, and clinical data support our findings.
arXiv:1802.07024v5
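
To make the abstract's two key ideas concrete, here is a minimal sketch (not the authors' implementation; every function and variable name below is an assumption of ours): first, calibrated class probabilities are used as soft proxies for the true labels, so the expected value of a metric such as accuracy can be estimated per example without ground truth, giving an abstention score; second, the JS-divergence baseline scores each example by the Jensen-Shannon divergence between its calibrated posterior and the prior class probabilities, abstaining on the examples least distinguishable from the prior.

```python
# Illustrative sketch, assuming accuracy as the metric of interest:
# (1) calibrated probabilities as soft labels to estimate a metric's
#     expected value per example;
# (2) a baseline that abstains on examples whose posterior has the
#     smallest Jensen-Shannon divergence from the prior class probabilities.
import numpy as np

def expected_accuracy_proxy(calibrated_probs):
    """Per-example expected accuracy of predicting the argmax class,
    treating the calibrated probabilities as soft true labels."""
    return calibrated_probs.max(axis=1)

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between probability vectors p and q."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def abstain_lowest(scores, abstain_frac):
    """Indices of the abstain_frac of examples with the lowest score."""
    n = int(round(abstain_frac * len(scores)))
    return np.argsort(scores)[:n]

# Toy usage on a 3-class problem with stand-in calibrated probabilities.
rng = np.random.default_rng(0)
probs = rng.dirichlet([2.0, 2.0, 2.0], size=1000)
priors = probs.mean(axis=0)  # empirical prior over classes

# Metric-proxy scoring: abstain where expected accuracy is lowest ...
acc_scores = expected_accuracy_proxy(probs)
abstain_metric = abstain_lowest(acc_scores, abstain_frac=0.1)

# ... versus the JS-divergence baseline: abstain where the posterior is
# closest to the prior (least informative examples).
js_scores = np.array([js_divergence(p, priors) for p in probs])
abstain_js = abstain_lowest(js_scores, abstain_frac=0.1)
```

For plain accuracy the proxy score reduces to the familiar max-probability confidence rule; the framework's stated value is that the same calibrated-probability proxy extends to metrics such as sensitivity at a target specificity, auROC, and weighted Cohen's Kappa, where the examples worth abstaining on need not be the lowest-confidence ones.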