ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks [article]
2021 arXiv pre-print
To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, self and non-self label correction (LC). ...
Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation ...
To improve Self LC, we propose a novel method named Progressive Self Label Correction (ProSelfLC), which is end-to-end trainable and needs negligible extra cost. ...
arXiv:2005.03788v6
fatcat:hwo7trw4fjgeld4cwoeu5hu5mq
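The snippet above describes self label correction as a form of target modification. As a rough illustration only, and not the authors' exact formulation (ProSelfLC additionally derives its trust weight from training progress and prediction confidence), the sketch below blends the possibly noisy one-hot target with the model's own softmax prediction under a trust weight epsilon; the function names and the PyTorch framing are assumptions.

```python
import torch
import torch.nn.functional as F

def self_corrected_targets(logits, onehot_targets, epsilon):
    """Blend the given (possibly noisy) one-hot targets with the model's
    own prediction. epsilon in [0, 1] is how much the model trusts itself;
    in progressive schemes this trust grows as training proceeds."""
    probs = F.softmax(logits.detach(), dim=1)  # model's current belief, no gradient
    return (1.0 - epsilon) * onehot_targets + epsilon * probs

def soft_cross_entropy(logits, soft_targets):
    """Cross entropy against soft (non one-hot) targets."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```

A typical training step would compute `targets = self_corrected_targets(logits, onehot, epsilon)` with an epsilon schedule of one's choosing, then minimise `soft_cross_entropy(logits, targets)`.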
Mutual Distillation of Confident Knowledge [article]
2022 arXiv pre-print
For example, CMD-P obtains new state-of-the-art results in robustness against label noise. ...
However, not all knowledge is certain and correct, especially under adverse conditions. For example, label noise usually leads to less reliable models due to undesired memorization. ...
Label noise methods in the literature: we compare with classical and the latest label correction methods, including CE, LS, CP, Bootsoft, and ProSelfLC. ...
arXiv:2106.01489v2
fatcat:wmdxzb4eznfjpejixlam26gnri
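The snippet above mentions distilling only confident knowledge between peers. The sketch below is a generic two-peer mutual distillation loss with a confidence filter, not the paper's CMD-P algorithm; the threshold, the weighting factor, and the function name are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def mutual_distillation_losses(logits_a, logits_b, labels,
                               conf_threshold=0.9, alpha=0.5):
    """Each peer keeps its supervised CE loss and additionally matches the
    other peer's prediction, but only on samples where that other peer is
    confident (max softmax probability above conf_threshold)."""
    probs_a = F.softmax(logits_a, dim=1)
    probs_b = F.softmax(logits_b, dim=1)

    # masks selecting samples where the *teaching* peer is confident
    mask_a_conf = (probs_a.max(dim=1).values > conf_threshold).float()
    mask_b_conf = (probs_b.max(dim=1).values > conf_threshold).float()

    ce_a = F.cross_entropy(logits_a, labels)
    ce_b = F.cross_entropy(logits_b, labels)

    # per-sample KL(teacher || student), masked by the teacher's confidence
    kl_a_from_b = F.kl_div(F.log_softmax(logits_a, dim=1),
                           probs_b.detach(), reduction='none').sum(dim=1)
    kl_b_from_a = F.kl_div(F.log_softmax(logits_b, dim=1),
                           probs_a.detach(), reduction='none').sum(dim=1)

    loss_a = ce_a + alpha * (mask_b_conf * kl_a_from_b).mean()
    loss_b = ce_b + alpha * (mask_a_conf * kl_b_from_a).mean()
    return loss_a, loss_b
```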