On the Difficulty of Evaluating Baselines: A Study on Recommender Systems

Steffen Rendle, Li Zhang, Yehuda Koren
2019, arXiv preprint
Numerical evaluations with comparisons to baselines play a central role when judging research in recommender systems. In this paper, we show that running baselines properly is difficult. We demonstrate this issue on two extensively studied datasets. First, we show that the baseline results used in numerous publications over the past five years for the Movielens 10M benchmark are suboptimal. With a careful setup of a vanilla matrix factorization baseline, we are not only able to improve upon the reported results for this baseline but even to outperform the reported results of any newly proposed method. Second, we recap the tremendous effort that was required by the community to obtain high-quality results for simple methods on the Netflix Prize. Our results indicate that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community.
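To make the "vanilla matrix factorization baseline" concrete, the following is a minimal sketch of biased matrix factorization trained with SGD on explicit ratings. It is not the paper's implementation; the hyperparameters (latent dimension, learning rate, regularization, epoch count) are illustrative placeholders, and careful tuning of exactly such knobs is the paper's point.

```python
import numpy as np

def train_mf(ratings, n_users, n_items, dim=8, lr=0.05, reg=0.02,
             epochs=50, seed=0):
    """Biased matrix factorization via plain SGD.

    ratings: list of (user, item, rating) triples.
    Hyperparameters are illustrative, not tuned values from the paper.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(0, 0.1, (n_users, dim))    # user latent factors
    Q = rng.normal(0, 0.1, (n_items, dim))    # item latent factors
    bu = np.zeros(n_users)                    # user biases
    bi = np.zeros(n_items)                    # item biases
    mu = float(np.mean([r for _, _, r in ratings]))  # global mean
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + bu[u] + bi[i] + P[u] @ Q[i]
            e = r - pred
            # L2-regularized SGD updates for biases and factors.
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            pu = P[u].copy()
            P[u] += lr * (e * Q[i] - reg * P[u])
            Q[i] += lr * (e * pu - reg * Q[i])
    return mu, bu, bi, P, Q

def rmse(model, ratings):
    mu, bu, bi, P, Q = model
    errs = [(r - (mu + bu[u] + bi[i] + P[u] @ Q[i])) ** 2
            for u, i, r in ratings]
    return float(np.sqrt(np.mean(errs)))

# Tiny synthetic example: 3 users, 3 items (hypothetical data).
data = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0),
        (1, 2, 1.0), (2, 0, 4.0), (2, 2, 2.0)]
model = train_mf(data, n_users=3, n_items=3)
print(rmse(model, data))  # training RMSE, well below the mean-predictor error
```

Even this small sketch has several choices (biases vs. none, regularization strength, number of epochs) whose settings materially affect the reported accuracy, which is why reproducing a "simple" baseline properly is harder than it looks.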
arXiv:1905.01395v1