
ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks [article]

Xinshao Wang, Yang Hua, Elyor Kodirov, David A. Clifton, Neil M. Robertson
2021 arXiv pre-print
To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, self and non-self label correction (LC). ... Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation ... To improve Self LC, we propose a novel method named Progressive Self Label Correction (ProSelfLC), which is end-to-end trainable and needs negligible extra cost. ...
arXiv:2005.03788v6 fatcat:hwo7trw4fjgeld4cwoeu5hu5mq
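The progressive self label correction described in this abstract amounts to blending the given (possibly noisy) one-hot label with the model's own prediction, with a weight that grows as training progresses and as the model becomes confident. The sketch below is a minimal NumPy illustration of that idea; the function name, the trust schedule, and the use of the max probability as a confidence proxy are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def proselflc_target(one_hot, pred, global_trust, example_trust_exp=1.0):
    """Blend the one-hot label with the model's prediction.

    The blending weight eps grows with training progress (global_trust
    in [0, 1]) and with the model's confidence on each example, so
    label correction is applied progressively rather than from the
    start. Hypothetical sketch, not the authors' exact schedule.
    """
    # Local trust: peakiness of the prediction (max probability as a proxy).
    local_trust = pred.max(axis=-1, keepdims=True) ** example_trust_exp
    eps = global_trust * local_trust            # per-example weight in [0, 1]
    return (1.0 - eps) * one_hot + eps * pred   # revised training target
```

Early in training `global_trust` is near zero, so the target stays close to the annotated one-hot label; later, a confident prediction can overturn a noisy label, which is the "self label correction" effect.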

Mutual Distillation of Confident Knowledge [article]

Ziyun Li, Xinshao Wang, Di Hu, Neil M. Robertson, David A. Clifton, Christoph Meinel, Haojin Yang
2022 arXiv pre-print
For example, CMD-P obtains new state-of-the-art results in robustness against label noise. ... However, not all knowledge is certain and correct, especially under adverse conditions. For example, label noise usually leads to less reliable models due to undesired memorization. ... LABEL NOISE METHODS IN THE LITERATURE We compare with classical and latest label correction methods, including CE, LS, CP, Bootsoft, and ProSelfLC. ...
arXiv:2106.01489v2 fatcat:wmdxzb4eznfjpejixlam26gnri
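The abstract's premise is that in mutual distillation, not all peer knowledge should be transferred, since an unconfident or wrong peer teaches noise. A minimal NumPy sketch of that filtering idea follows; the confidence threshold, the KL form of the distillation term, and the function name are illustrative assumptions rather than the paper's exact objective:

```python
import numpy as np

def confident_mutual_kl(p_a, p_b, conf_threshold=0.8):
    """Per-example distillation loss for model A learning from peer B.

    A distills from B only on examples where B's prediction is
    confident (max probability above a threshold); elsewhere the
    distillation term is zeroed out. Hypothetical sketch of
    confidence-filtered knowledge transfer.
    """
    eps = 1e-12
    mask_b = p_b.max(axis=-1) >= conf_threshold   # B confident -> teach A
    kl_a_from_b = np.where(
        mask_b,
        (p_b * np.log((p_b + eps) / (p_a + eps))).sum(axis=-1),  # KL(B || A)
        0.0,
    )
    return kl_a_from_b
```

In a symmetric ("mutual") setup, the same rule is applied in the other direction with the roles of A and B swapped, so each network only passes on what it is confident about.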