
Revisiting offline evaluation for implicit-feedback recommender systems

Olivier Jeunen
2019 Proceedings of the 13th ACM Conference on Recommender Systems - RecSys '19  
Recommender systems are typically evaluated in an offline setting.  ...  A subset of the available user-item interactions is sampled to serve as the test set, and a model trained on the remaining data points is then evaluated on its ability to predict which interactions  ...  CONCLUSION In this paper, we have presented the key differences between on- and offline evaluation methodologies for implicit-feedback recommender systems.  ... 
doi:10.1145/3298689.3347069 dblp:conf/recsys/Jeunen19 fatcat:tlm64i2mbza6hequt4xyrhl4zu
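The offline protocol the snippet above describes (sample a subset of interactions as a test set, train on the rest, score the model's predictions) can be sketched in a few lines. The toy interaction data and the popularity baseline below are hypothetical illustrations, not the paper's actual setup:

```python
import random

# Hypothetical logged implicit-feedback interactions: (user, item) pairs.
interactions = [(u, i) for u in range(5) for i in range(10) if (u + i) % 3 == 0]

random.seed(0)
test = set(random.sample(interactions, len(interactions) // 5))  # held-out test set
train = [x for x in interactions if x not in test]

# A trivial "model": recommend the globally most popular items in the train set.
popularity = {}
for _, item in train:
    popularity[item] = popularity.get(item, 0) + 1
ranked = sorted(popularity, key=popularity.get, reverse=True)

# Hit rate@k: fraction of held-out interactions whose item appears in the top-k.
k = 3
top_k = ranked[:k]
hit_rate = sum(1 for _, item in test if item in top_k) / len(test)
```

Any reasonable recommender can stand in for the popularity baseline; the point is that the evaluation only checks whether held-out interactions are recovered, which is exactly the on/offline gap the paper examines.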

Content-based Neighbor Models for Cold Start in Recommender Systems

Maksims Volkovs, Guang Wei Yu, Tomi Poutanen
2017 Proceedings of the Recommender Systems Challenge 2017 - RecSys Challenge '17  
Unlike other competitions, here the participating teams were evaluated in two phases: offline and online. Models were first evaluated on the held-out offline test set.  ...  Top models were then A/B tested in the online phase, where new target users and items were released daily and recommendations were pushed into XING's live production system.  ...  Each team received daily batches of target users and items, and was asked to recommend relevant target items to target users with the constraint that at most one item can be recommended to each user  ... 
doi:10.1145/3124791.3124792 fatcat:kbr3n2ojhrgvdngz7uhlloakl4

Model-Based Reinforcement Learning with Adversarial Training for Online Recommendation [article]

Xueying Bai, Jian Guan, Hongning Wang
2020 arXiv   pre-print
Reinforcement learning is well suited for optimizing policies of recommender systems.  ...  Offline evaluation methods, such as importance sampling, can alleviate such limitations, but usually require a large amount of logged data and do not work well when the action space is large.  ...  Offline evaluation The problems of off-policy learning [22, 25, 26] and offline policy evaluation are generally pervasive and challenging in RL, and in recommender systems in particular.  ... 
arXiv:1911.03845v3 fatcat:qgonaucopfavnms4cud34j6gry

Study of a bias in the offline evaluation of a recommendation algorithm [article]

Arnaud De Myttenaere, Bénédicte Le Grand
2015 arXiv   pre-print
It thus influences the way users interact with the system and, as a consequence, biases the evaluation of the performance of a recommendation algorithm computed using historical data (via offline evaluation  ...  This paper describes this bias and discusses the relevance of a weighted offline evaluation to reduce it for different classes of recommendation algorithms.  ...  Here we describe the impact of previous recommendation campaigns on the offline evaluation score and compute the offline evaluation score by stochastic sampling on the sample data extracted from  ... 
arXiv:1511.01280v1 fatcat:s4j3pggganfibk4ninktcssubu

Modeling Online Behavior in Recommender Systems: The Importance of Temporal Context [article]

Milena Filipovic, Blagoj Mitrevski, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu Musat
2021 arXiv   pre-print
Recommender systems research tends to evaluate model performance offline and on randomly sampled targets, yet the same systems are later used to predict user behavior sequentially from a fixed point in  ...  Simulating online recommender system performance is notoriously difficult and the discrepancy between online and offline behaviors is typically not accounted for in offline evaluations.  ...  Inadequate evaluation techniques can lead to false confidence, which is especially detrimental in commercial settings. Recommender system evaluation can be done online or offline.  ... 
arXiv:2009.08978v3 fatcat:k5vnxo2bzzdprecculsrnr5dri

Diversity-Promoting Deep Reinforcement Learning for Interactive Recommendation [article]

Yong Liu, Yinan Zhang, Qiong Wu, Chunyan Miao, Lizhen Cui, Binqiang Zhao, Yin Zhao, Lu Guan
2019 arXiv   pre-print
Most previous interactive recommendation systems focus only on optimizing recommendation accuracy while overlooking other important aspects of recommendation quality, such as the diversity of recommendation  ...  Interactive recommendation, which models the explicit interactions between users and the recommender system, has attracted a lot of research attention in recent years.  ...  In the offline setting, the user's feedback on the recommendation results is fixed, which limits the effectiveness of interactive recommender systems in capturing users' dynamic preferences for items.  ... 
arXiv:1903.07826v1 fatcat:s5nlfafmvjhmlct5gar2qkcrc4

Compressive Features in Offline Reinforcement Learning for Recommender Systems [article]

Hung Nguyen, Minh Nguyen, Long Pham, Jennifer Adorno Nieves
2021 arXiv   pre-print
In this paper, we develop a recommender system for a game that suggests potential items to players based on their interactive behaviors to maximize revenue for the game provider.  ...  Our Q-learning-based system is then trained on the processed offline data set.  ...  Evaluation metric One key challenge in training offline reinforcement learning models is that the models often overestimate the reward values for samples that are under-represented in the  ... 
arXiv:2111.08817v1 fatcat:mf4es4u4e5fsvf72nzfx4ch3vi
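The entry above trains a Q-learning recommender from a fixed logged data set. A minimal tabular sketch of batch Q-learning on logged transitions is shown below; the states, actions, rewards, and hyperparameters are entirely hypothetical toy values, not the paper's configuration:

```python
# Offline (batch) Q-learning: sweep a fixed set of logged transitions
# repeatedly, bootstrapping Q-values without any new environment interaction.
logged = [
    # (state, action, reward, next_state) -- hypothetical game-recommendation log
    (0, "recommend_A", 1.0, 1),
    (0, "recommend_B", 0.0, 0),
    (1, "recommend_A", 0.0, 0),
    (1, "recommend_B", 2.0, 0),
]
actions = ["recommend_A", "recommend_B"]
gamma, alpha = 0.5, 0.1  # discount factor and learning rate

Q = {(s, a): 0.0 for s in (0, 1) for a in actions}
for _ in range(200):  # replay the fixed batch until values settle
    for s, a, r, s2 in logged:
        target = r + gamma * max(Q[(s2, b)] for b in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

# Greedy policy implied by the learned Q-values.
best = {s: max(actions, key=lambda a: Q[(s, a)]) for s in (0, 1)}
```

Note that because the batch is fixed, actions that are under-represented in the log get few updates; this is the overestimation problem the snippet above alludes to, which offline RL methods mitigate with conservative value estimates.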

Recommendations as Treatments

Thorsten Joachims, Ben London, Yi Su, Adith Swaminathan, Lequn Wang
2021 The AI Magazine  
This article explains how these techniques enable unbiased offline evaluation and learning despite biased data, and how they can inform considerations of fairness and equity in recommender systems.  ...  In recent years, a new line of research has taken an interventional view of recommender systems, where recommendations are viewed as actions that the system takes to have a desired effect.  ...  One such area is offline A/B testing, which is also known as off-policy evaluation in the literature.  ... 
doi:10.1609/aimag.v42i3.18141 fatcat:hdyi4nadijgp3fpieqojib5pfq
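The entry above discusses offline A/B testing, i.e. off-policy evaluation: estimating how a new policy would perform from data logged under the deployed policy. A standard estimator for this is inverse propensity scoring (IPS), sketched below on hypothetical logged data (the actions, propensities, and target policy are illustrative, not from the article):

```python
# Inverse propensity scoring (IPS) for off-policy evaluation.
# Each logged record holds the action taken by the deployed (logging) policy,
# the probability with which it was taken (the propensity), and the reward.
log = [
    {"action": "A", "propensity": 0.8, "reward": 1.0},
    {"action": "B", "propensity": 0.2, "reward": 0.0},
    {"action": "A", "propensity": 0.8, "reward": 0.0},
    {"action": "B", "propensity": 0.2, "reward": 1.0},
]

def target_prob(action):
    # Hypothetical target policy to evaluate: deterministically plays "B".
    return 1.0 if action == "B" else 0.0

# IPS estimate: average reward reweighted by target/logging probability.
# Records where the target policy would not have taken the logged action
# contribute zero; matching records are up-weighted by 1/propensity.
ips = sum(r["reward"] * target_prob(r["action"]) / r["propensity"]
          for r in log) / len(log)
```

The estimator is unbiased when propensities are correct and non-zero wherever the target policy has support, but its variance grows as propensities shrink, which is why large logs are typically needed.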

The Simpson's Paradox in the Offline Evaluation of Recommendation Systems [article]

Amir H. Jadidinejad, Craig Macdonald, Iadh Ounis
2021 arXiv   pre-print
Our in-depth experiments based on stratified sampling reveal that a very small minority of items that are frequently exposed by the deployed system acts as a confounding factor in the offline evaluation  ...  Using the relative comparison of many recommendation models, as in the typical offline evaluation of recommender systems, and based on the Kendall rank correlation coefficient, we show that our proposed  ...  loop feedback in the offline evaluation of recommendation systems.  ... 
arXiv:2104.08912v1 fatcat:fto33uml6bfsnmbypdzne77beq
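The entry above compares model rankings across evaluation protocols using the Kendall rank correlation coefficient. A pure-Python sketch of the tie-free variant (tau-a) on hypothetical model scores follows; the two score lists are illustrative toy data:

```python
# Kendall rank correlation (tau-a, no tie correction) between two score
# vectors over the same set of models: +1 for identical orderings, -1 for
# fully reversed orderings.
def kendall_tau(x, y):
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])  # same sign => pair agrees
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical scores for five models under two offline protocols; the
# protocols disagree on exactly one adjacent pair of models.
full_eval = [0.30, 0.25, 0.20, 0.15, 0.10]
stratified = [0.28, 0.26, 0.17, 0.18, 0.09]
tau = kendall_tau(full_eval, stratified)
```

A tau close to 1 means the two protocols would pick the same winners, which is the criterion the paper uses to judge whether an offline evaluation is trustworthy.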

Reducing Offline Evaluation Bias in Recommendation Systems [article]

Arnaud De Myttenaere, Boris Golden
2014 arXiv   pre-print
This adaptation process influences the way users interact with the system and, as a consequence, increases the difficulty of evaluating a recommendation algorithm with historical data (via offline evaluation  ...  This paper analyses this evaluation bias and proposes a simple item-weighting solution that reduces its impact.  ...  This bias in the offline evaluation of online systems can also be caused by other events, such as a promotional offer on specific products between a first offline evaluation and a second one.  ... 
arXiv:1407.0822v1 fatcat:vjrof7qe4jaufa5bml4rrfl5jq
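The item-weighting idea in the entry above can be illustrated by down-weighting items whose exposure was inflated by past recommendation campaigns, so that hits on heavily promoted items count for less. The data below is a hypothetical toy example, not the paper's weighting scheme:

```python
# Item-weighted offline evaluation: weight each test interaction by the
# inverse of the item's exposure under the deployed system.
test_interactions = [("u1", "A"), ("u2", "A"), ("u3", "B"), ("u4", "C")]
correct = {("u1", "A"), ("u2", "A")}  # interactions the model got right

# Exposure counts from the deployed system; item "A" was heavily promoted,
# so an algorithm that simply re-recommends "A" looks artificially good.
exposure = {"A": 100, "B": 10, "C": 10}
weights = {item: 1.0 / exposure[item] for item in exposure}

weighted_hits = sum(weights[i] for (u, i) in test_interactions if (u, i) in correct)
weighted_total = sum(weights[i] for (u, i) in test_interactions)
weighted_score = weighted_hits / weighted_total

# Unweighted score for comparison: 2 correct out of 4 interactions.
plain_score = len(correct) / len(test_interactions)
```

Here the model only recovers interactions with the promoted item, so the plain score of 0.5 collapses to roughly 0.09 once exposure is accounted for, which is the bias-reduction effect the paper targets.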

A Survey on Reinforcement Learning for Recommender Systems [article]

Yuanguo Lin, Yong Liu, Fan Lin, Pengcheng Wu, Wenhua Zeng, Chunyan Miao
2021 arXiv   pre-print
Nevertheless, various challenges arise when applying RL in recommender systems.  ...  Recommender systems have been widely applied in different real-life scenarios to help us find useful information.  ...  Sampling Efficiency: Sampling plays a substantial role in RL, especially in RL-based recommender systems.  ... 
arXiv:2109.10665v1 fatcat:whrqgxcb4fa53omquvpy6nitjm

LHRM: A LBS based Heterogeneous Relations Model for User Cold Start Recommendation in Online Travel Platform [article]

Ziyi Wang, Wendong Xiao, Yu Li, Zulong Chen, Zhi Jiang
2021 arXiv   pre-print
Experimental results on real data from Fliggy's offline log illustrate the effectiveness of LHRM.  ...  Most current recommender systems use users' historical behaviour data to predict their preferences. However, it is difficult to recommend items to new users accurately.  ...  To evaluate the proposed approach, we collect offline log data from the Fliggy and Taobao domains over the past month as the dataset.  ... 
arXiv:2108.02344v1 fatcat:7vij6psta5ggxkwv26hmydlfca

Learning Item-Interaction Embeddings for User Recommendations [article]

Xiaoting Zhao, Raphael Louca, Diane Hu, Liangjie Hong
2018 arXiv   pre-print
Because of its computational efficiency, our model lends itself naturally as a candidate set selection method, and we evaluate it as such in an industry-scale recommendation system that serves live traffic  ...  Consequently, a user's recommendations should be based not only on the items from their past activity, but also on the way in which they interacted with those items.  ...  We evaluate our model as a user-specific candidate set selection method in an end-to-end production recommendation system that serves live traffic on  ... 
arXiv:1812.04407v1 fatcat:vqfqm3iwwza7lghjgan2wbztre

Estimating Error and Bias in Offline Evaluation Results

Mucun Tian, Michael D. Ekstrand
2020 Proceedings of the 2020 Conference on Human Information Interaction and Retrieval  
Offline evaluations of recommender systems attempt to estimate users' satisfaction with recommendations using static data from prior user interactions.  ...  However, offline evaluation cannot accurately assess novel, relevant recommendations, because the most novel items were previously unknown to the user, so they are missing from the historical data and  ...  CONCLUSIONS AND FUTURE WORK We have simulated user preference for items and resulting consumption observations in order to estimate error and bias in the results of offline evaluations of recommender systems  ... 
doi:10.1145/3343413.3378004 dblp:conf/chiir/TianE20 fatcat:dofm7765ircrzbk5tjyunrr2q4

A Hybrid Recommendation Method Based on Feature for Offline Book Personalization [article]

Xixi Li, Jiahao Xing, Haihui Wang, Lingfang Zheng, Suling Jia, Qiang Wang
2018 arXiv   pre-print
Recommender systems have been widely used in different areas. Collaborative filtering focuses on ratings, ignoring the features of the items themselves.  ...  The experiment shows that our feature-based hybrid recommendation method performs better than any single recommendation method on offline book retail data.  ...  Evaluation of recommendation system Experimental metric Precision is a metric that represents the probability that an item recommended as relevant is truly relevant.  ... 
arXiv:1804.11335v1 fatcat:tfkykpw63nhtdchshtotlgzu4m
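The precision metric named in the entry above is usually computed at a cutoff k over a ranked recommendation list. A minimal sketch with hypothetical toy data:

```python
# Precision@k: the fraction of the top-k recommended items that are
# actually relevant to the user.
def precision_at_k(recommended, relevant, k):
    top = recommended[:k]
    return sum(1 for item in top if item in relevant) / k

# Hypothetical ranked book recommendations and the user's relevant set.
recommended = ["b1", "b2", "b3", "b4", "b5"]
relevant = {"b2", "b5", "b9"}
p3 = precision_at_k(recommended, relevant, 3)  # 1 of the top-3 is relevant
```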
Showing results 1 — 15 out of 14,265 results