Revisiting offline evaluation for implicit-feedback recommender systems

Olivier Jeunen
Proceedings of the 13th ACM Conference on Recommender Systems (RecSys '19), 2019
Recommender systems are typically evaluated in an offline setting: a subset of the available user-item interactions is sampled to serve as a test set, and a model trained on the remaining data points is then evaluated on how well it predicts which interactions were left out. Alternatively, in an online evaluation setting, multiple versions of the system are deployed and various metrics for those systems are recorded; systems that score better on these metrics are then typically preferred. Online evaluation is effective, but inefficient for a number of reasons. Offline evaluation is much more efficient, but current methodologies often fail to accurately predict online performance. In this work, we identify three ways to improve and extend current work on offline evaluation methodologies. More specifically, we believe there is much room for improvement in temporal evaluation, off-policy evaluation, and moving beyond using just clicks to evaluate performance.

CCS CONCEPTS: • Information systems → Recommender systems; Evaluation of retrieval results.
doi:10.1145/3298689.3347069 dblp:conf/recsys/Jeunen19
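The offline evaluation loop the abstract describes, including the temporal flavour the authors advocate, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's methodology: the toy interaction log, the split time, and the popularity-based recommender are all assumptions made for the example.

```python
from collections import Counter

# Toy implicit-feedback log: (user, item, timestamp) triples (illustrative data).
interactions = [
    ("u1", "a", 1), ("u1", "b", 2), ("u2", "a", 3),
    ("u2", "c", 4), ("u3", "b", 5), ("u1", "c", 6),
    ("u3", "a", 7), ("u2", "b", 8),
]

def temporal_split(log, split_time):
    """Temporal evaluation: train on interactions strictly before
    split_time, hold out everything from split_time onward."""
    train = [x for x in log if x[2] < split_time]
    test = [x for x in log if x[2] >= split_time]
    return train, test

def recall_at_k(train, test, k=2):
    """Recommend the k globally most popular training items to every
    user and measure the fraction of held-out interactions recovered."""
    popularity = Counter(item for _, item, _ in train)
    top_k = {item for item, _ in popularity.most_common(k)}
    hits = sum(1 for _, item, _ in test if item in top_k)
    return hits / len(test) if test else 0.0

train, test = temporal_split(interactions, split_time=6)
print(recall_at_k(train, test, k=2))
```

A random (non-temporal) split would instead sample the test set uniformly from the log, allowing the model to train on interactions that occur after the ones it is evaluated on; the temporal split above avoids that leakage, which is one of the improvements the abstract argues for.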