A critical analysis of metrics used for measuring progress in artificial intelligence [article]

Kathrin Blagec, Georg Dorffner, Milad Moradi, Matthias Samwald
2021 · arXiv preprint
Comparing model performances on benchmark datasets is an integral part of measuring and driving progress in artificial intelligence. A model's performance on a benchmark dataset is commonly assessed based on a single performance metric or a small set of metrics. While this enables quick comparisons, it entails the risk of inadequately reflecting model performance if the metric does not sufficiently cover all performance characteristics. It is unknown to what extent this might impact benchmarking efforts. To address this question, we analysed the current landscape of performance metrics based on data covering 3867 machine learning model performance results from the open repository 'Papers with Code'. Our results suggest that the large majority of metrics currently used have properties that may result in an inadequate reflection of a model's performance. While alternative metrics that address these problematic properties have been proposed, they are currently rarely used. Furthermore, we describe ambiguities in reported metrics, which may lead to difficulties in interpreting and comparing model performances.
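
The abstract does not name the problematic metric properties here, but a canonical example of the kind of issue it alludes to is accuracy on class-imbalanced data. The following pure-Python sketch is an illustration of that assumption, not code from the paper: a trivial classifier that always predicts the majority class scores 95% accuracy, while balanced accuracy (one commonly proposed alternative) exposes chance-level performance.

    # Illustrative sketch (not from the paper): how a single metric such as
    # accuracy can misrepresent performance on class-imbalanced data.

    y_true = [0] * 95 + [1] * 5   # 95% negatives, 5% positives
    y_pred = [0] * 100            # majority-class "classifier"

    # Plain accuracy: fraction of correct predictions.
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    # Balanced accuracy: mean of per-class recalls.
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    balanced_accuracy = sum(recalls) / len(recalls)

    print(f"accuracy          = {accuracy:.2f}")           # 0.95 -- looks strong
    print(f"balanced accuracy = {balanced_accuracy:.2f}")  # 0.50 -- chance level

Because balanced accuracy averages the recall of each class, the majority class cannot dominate the score, which is why it is often suggested as a replacement for accuracy on imbalanced benchmarks.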
arXiv:2008.02577v2