Learning from Noisy Labels with No Change to the Training Process

Mingyuan Zhang, Jane Lee, Shivani Agarwal
International Conference on Machine Learning (ICML), 2021
There has been much interest in recent years in developing learning algorithms that can learn accurate classifiers from data with noisy labels. A widely studied noise model is that of class-conditional noise (CCN), wherein a label y is flipped to a label y' with some associated noise probability that depends on both y and y'. In the multiclass setting, all previously proposed algorithms under the CCN model involve changing the training process, by introducing a 'noise-correction' to the surrogate loss to be minimized over the noisy training examples. In this paper, we show that this is really unnecessary: one can simply perform class probability estimation (CPE) on the noisy examples, e.g. using a standard (multiclass) logistic regression algorithm, and then apply noise-correction only in the final prediction step. This means that the training algorithm itself does not need any change, and one can simply use standard off-the-shelf implementations with no modification to the code for training. Our approach can handle general multiclass loss matrices, including the usual 0-1 loss, but also other losses such as those used for ordinal regression problems. We also provide a quantitative regret transfer bound, which bounds the target regret on the true distribution in terms of the CPE regret on the noisy distribution; in doing so, we extend the notion of strong properness introduced for binary losses by Agarwal (2014) to the multiclass case. Our bound suggests that the sample complexity of learning under CCN increases as the noise matrix approaches singularity. We also provide fixes and potential improvements for noise estimation methods that involve computing anchor points. Our experiments confirm our theoretical findings.
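The prediction-time noise correction described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' released code: it assumes the noise matrix T is known or already estimated, with the convention T[i, j] = P(noisy label j | clean label i) (row-stochastic), takes a user-supplied loss matrix L with L[y, t] the cost of predicting t when the clean label is y, and uses scikit-learn's LogisticRegression as the unmodified off-the-shelf CPE learner.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def fit_cpe_on_noisy_data(X, y_noisy):
    """Train a standard multiclass class probability estimator directly on
    the noisy labels, with no change to the training procedure."""
    return LogisticRegression(max_iter=1000).fit(X, y_noisy)


def noise_corrected_predict(model, X, T, L):
    """Apply noise correction only at prediction time.

    Under CCN (row convention), the noisy class probabilities satisfy
        eta_noisy(x) = eta_clean(x) @ T,
    so the clean probabilities are recovered as eta_noisy(x) @ T^{-1};
    predictions are then made by minimizing expected loss under L.
    """
    eta_noisy = model.predict_proba(X)               # shape (n_samples, n_classes)
    eta_clean = eta_noisy @ np.linalg.inv(T)         # invert the noise process
    eta_clean = np.clip(eta_clean, 0.0, None)        # guard against small negative entries
    eta_clean /= eta_clean.sum(axis=1, keepdims=True)
    expected_losses = eta_clean @ L                  # shape (n_samples, n_predictions)
    return expected_losses.argmin(axis=1)            # least expected-loss prediction
```

For instance, with K classes and the usual 0-1 loss one would pass L = 1 - np.eye(K), in which case the prediction reduces to the argmax of the corrected class probabilities; an absolute-error loss matrix would instead yield ordinal-regression-style predictions.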