Auroral Classification Ergonomics and the Implications for Machine Learning

Derek McKay, Andreas Kvammen
2020 unpublished
Abstract. The machine learning research community has focused greatly on bias in algorithms and has identified different manifestations of it. Bias in the training samples is recognised as a potential source of prejudice in machine learning. It can be introduced by the human experts who define the training sets. As machine learning techniques are being applied to auroral classification, it is important to identify and address potential sources of bias. In an ongoing study, 13 947 auroral images were manually classified, with significant differences between classifications. This large data set allowed the identification of some of these biases, especially those originating from the ergonomics of the classification process. These findings are presented in this paper to serve as a checklist for improving training data integrity, not just for expert classifications, but also for crowd-sourced, citizen science projects. As the application of machine learning techniques to auroral research is relatively new, it is important that biases are identified and addressed before they become endemic in the corpus of training data.
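The abstract notes significant differences between manual classifications of the same images. A minimal sketch of how such disagreement can be quantified is given below, using Cohen's kappa and a confusion matrix between two annotators; this is not the paper's method, and the class names and toy label vectors are illustrative assumptions only.

```python
# Minimal sketch (not from the paper): quantifying disagreement between two
# human annotators who labelled the same set of auroral images.
# Class names and label vectors below are illustrative assumptions.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Hypothetical label vectors, one entry per image, aligned by image index.
labels_expert_a = np.array(["arc", "diffuse", "arc", "cloudy", "discrete"])
labels_expert_b = np.array(["arc", "discrete", "arc", "cloudy", "diffuse"])

# Cohen's kappa: chance-corrected agreement (1.0 = perfect, 0.0 = chance level).
kappa = cohen_kappa_score(labels_expert_a, labels_expert_b)

# The confusion matrix shows *which* classes the annotators confuse, which is
# where class-specific (e.g. ergonomics-driven) biases would become visible.
classes = sorted(set(labels_expert_a) | set(labels_expert_b))
cm = confusion_matrix(labels_expert_a, labels_expert_b, labels=classes)

print(f"Cohen's kappa: {kappa:.3f}")
print("Annotator A (rows) vs annotator B (columns):")
print(classes)
print(cm)
```

Reporting a chance-corrected agreement score alongside a per-class confusion matrix is one common way to separate overall disagreement from systematic, class-specific confusions of the kind the paper attributes to classification ergonomics.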
doi:10.5194/gi-2019-41