15,940 Hits in 6.1 sec

CROC: A New Evaluation Criterion for Recommender Systems

Andrew I. Schein, Alexandrin Popescul, Lyle H. Ungar, David M. Pennock
2005 Electronic Commerce Research  
Evaluation of a recommender system algorithm is a challenging task due to the many possible scenarios in which such systems may be deployed.  ...  Our CROC curve supplements the widely used ROC curve in recommender system evaluation by discovering performance characteristics that standard ROC evaluation often ignores.  ...  We would like to thank the reviewers for their many valuable comments.  ... 
doi:10.1023/b:elec.0000045973.51289.8c fatcat:3oltlakerrdrte4vyb7l6aoyna
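The ROC evaluation this entry builds on can be sketched as follows. This is a minimal illustration of sweeping a score threshold over binary relevance labels, not the paper's CROC variant (which additionally constrains recommendations per user); the function name and toy scores are illustrative.

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs swept over score thresholds, best score first.

    scores: real-valued predictions; labels: 1 for relevant, 0 for irrelevant.
    """
    ranked = sorted(zip(scores, labels), key=lambda pair: -pair[0])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in ranked:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points
```

For a perfect ranking the curve passes through (0, 1); the area under these points is the familiar AUC.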

Lessons Learned Addressing Dataset Bias in Model-Based Candidate Generation at Twitter [article]

Alim Virani, Jay Baxter, Dan Shiebler, Philip Gautier, Shivam Verma, Yan Xia, Apoorv Sharma, Sumit Binnani, Linlin Chen, Chenguang Yu
2021 arXiv   pre-print
Traditionally, heuristic methods are used to generate candidates for large scale recommender systems.  ...  Popular techniques to correct dataset bias, such as inverse propensity scoring, do not work well in the context of candidate generation.  ...  Acknowledgements We would like to thank everyone at Twitter who collaborated with us in the completion of this work.  ... 
arXiv:2105.09293v1 fatcat:nkny2ih3xzcdjctwaygvofb2pm
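The inverse propensity scoring (IPS) technique the abstract refers to can be sketched in a few lines. This only illustrates the standard estimator being discussed (weight each logged outcome by the inverse of its probability of having been logged); the paper's point is that this correction works poorly for candidate generation, and the function name and numbers here are illustrative.

```python
def ips_estimate(rewards, propensities):
    """IPS estimate of mean reward under the target policy.

    Each observed reward is reweighted by 1 / P(the sample was logged),
    which makes the estimator unbiased in expectation when propensities
    are correct and bounded away from zero.
    """
    n = len(rewards)
    return sum(r / p for r, p in zip(rewards, propensities)) / n
```

In practice small propensities blow up the variance, which is one reason IPS is often clipped or self-normalized.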

Understanding and Mitigating Multi-Sided Exposure Bias in Recommender Systems [article]

Masoud Mansoury
2021 arXiv   pre-print
For evaluation, I introduce several metrics for measuring the exposure fairness for items and suppliers, and show that these metrics better capture the fairness properties in the recommendation results  ...  I propose a rating transformation technique that works as a pre-processing step before building the recommendation model to alleviate the inherent popularity bias in the input data and consequently to  ...  I have learnt enormously about the field of recommender systems during our weekly group meetings.  ... 
arXiv:2111.05564v1 fatcat:x6sjjidhzrfzbmibqts6ri7ufy

Popularity Bias in Recommendation: A Multi-stakeholder Perspective [article]

Himan Abdollahpouri
2020 arXiv   pre-print
In this dissertation, I study the impact of popularity bias in recommender systems from a multi-stakeholder perspective.  ...  Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail items overall.  ...  All algorithms are evaluated using different evaluation metrics described in Chapter 4 and two state-of-the-art baselines for popularity bias mitigation are also included for comparison.  ... 
arXiv:2008.08551v1 fatcat:yiuamp6lcnc2bmhzkklakq6ui4

Do Metrics Make Recommender Algorithms?

Elica Campochiaro, Riccardo Casatta, Paolo Cremonesi, Roberto Turrin
2009 2009 International Conference on Advanced Information Networking and Applications Workshops  
In recent years, the Netflix contest has attracted considerable attention from recommender systems researchers.  ...  However, many recent papers on recommender systems present results evaluated with the methodology used in the Netflix contest, even in domains where the objectives differ from those of the contest (e.g.,  ...  The second bias is due to item popularity (e.g., items positively rated by many users).  ... 
doi:10.1109/waina.2009.127 dblp:conf/aina/CampochiaroCCT09 fatcat:rn7f3425dzccdjsldu2gl4nwc4
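The item-popularity bias mentioned in the snippet is often diagnosed with a simple statistic: the average popularity of the items a system recommends, compared against the catalog average. This is one common diagnostic, not necessarily the measure used in this paper; the function name and counts below are illustrative.

```python
def average_rec_popularity(recommendations, popularity):
    """Mean popularity of recommended items, averaged over users.

    recommendations: list of per-user recommendation lists (item ids).
    popularity: dict mapping item id -> interaction count in the data.
    """
    per_user = [sum(popularity[item] for item in recs) / len(recs)
                for recs in recommendations]
    return sum(per_user) / len(per_user)
```

A value far above the catalog's mean popularity indicates the recommender is concentrating on head items.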

The Connection Between Popularity Bias, Calibration, and Fairness in Recommendation [article]

Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
2020 arXiv   pre-print
In particular, we conjecture that popularity bias which is a well-known phenomenon in recommendation is one important factor leading to miscalibration in recommendation.  ...  In this paper, we use a metric called miscalibration for measuring how a recommendation algorithm is responsive to users' true preferences and we consider how various algorithms may result in different  ...  In the next sections, we will define a metric for measuring the degree to which popularity bias is propagated / amplified by the recommendation algorithm.  ... 
arXiv:2008.09273v1 fatcat:lug6opaqyvfizp3sfqpel4fmiy
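Miscalibration, as discussed in this entry, is commonly quantified by comparing the category (e.g., genre) distribution of a user's history against that of their recommendation list. A frequent choice is the KL divergence between the two distributions; the exact metric in the paper may differ, and the function below is an illustrative sketch under that assumption.

```python
import math

def miscalibration(p, q, eps=1e-9):
    """KL(p || q) between a user's historical category distribution p
    and the recommended-list category distribution q.

    0.0 means the recommendations are perfectly calibrated to the
    user's interests; larger values mean greater miscalibration.
    """
    return sum(pi * math.log(pi / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0)
```

For a user who watches 50% drama and 50% comedy but is recommended only drama, the divergence is large; identical distributions give exactly zero.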

Addressing the Multistakeholder Impact of Popularity Bias in Recommendation Through Calibration [article]

Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
2020 arXiv   pre-print
In addition, we demonstrate that existing evaluation metrics for popularity bias do not reflect the performance of the algorithms when it is measured from the perspective of different stakeholders.  ...  Popularity bias is a well-known phenomenon in recommender systems: popular items are recommended even more frequently than their popularity would warrant, amplifying long-tail effects already present in  ...  . • We show that some of the existing metrics in the literature to evaluate popularity bias mitigation often hide important information about how a certain algorithm controls popularity bias for different  ... 
arXiv:2007.12230v1 fatcat:sgzqa6om4ranxirpm4xf3fvgjq

User-centric evaluation of a K-furthest neighbor collaborative filtering recommender algorithm

Alan Said, Ben Fields, Brijnesh J. Jain, Sahin Albayrak
2013 Proceedings of the 2013 conference on Computer supported cooperative work - CSCW '13  
A standard k-nearest neighbor recommender is used as a baseline in both evaluation settings.  ...  Collaborative filtering recommender systems often use nearest neighbor methods to identify candidate items.  ...  Michael Meder from TU Berlin for their help with conceptualization, implementation and evaluation of the user study.  ... 
doi:10.1145/2441776.2441933 dblp:conf/cscw/SaidFJA13 fatcat:z5bftgxp7vdetdb63m2myzfcui
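The k-furthest-neighbor idea this paper evaluates inverts the usual neighbor selection: instead of the k most similar users, it starts from the least similar ones. The sketch below only shows that inverted selection step, with an illustrative data shape (a precomputed similarity map); the paper's full algorithm involves more than this.

```python
def k_furthest(similarities, k):
    """Return the k users with the LOWEST similarity scores.

    similarities: dict mapping user id -> similarity to the target user.
    A standard k-NN recommender would take the highest scores instead.
    """
    return sorted(similarities, key=similarities.get)[:k]
```

Swapping this selection into a neighborhood model is what lets the authors compare k-furthest against the k-nearest baseline in both offline and user-study settings.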

On Sampled Metrics for Item Recommendation

Walid Krichene, Steffen Rendle
2020 Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining  
Item recommendation algorithms are evaluated using ranking metrics that depend on the positions of relevant items.  ...  This paper investigates sampled metrics in more detail and shows that they are inconsistent with their exact version, in the sense that they do not persist relative statements, e.g., recommender A is better  ...  ACKNOWLEDGEMENTS We would like to thank Nicolas Mayoraz and Li Zhang for their helpful comments and suggestions.  ... 
doi:10.1145/3394486.3403226 fatcat:ib3iveavlzdcxjkijdwpf5uere
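The sampled-metric pitfall this paper analyzes can be made concrete with Hit@k: the exact metric ranks the target item against the full catalog, while the sampled variant ranks it against only m drawn negatives, which can change which of two recommenders looks better. The functions and scores below are an illustrative sketch, not the paper's experimental setup.

```python
import random

def hit_at_k(target_score, negative_scores, k):
    """Exact Hit@k: 1 if the target outranks all but < k negatives."""
    rank = 1 + sum(s > target_score for s in negative_scores)
    return int(rank <= k)

def sampled_hit_at_k(target_score, all_negative_scores, k, m, rng):
    """Sampled Hit@k: rank the target against only m sampled negatives.

    This is the cheaper but, as the paper shows, inconsistent estimator.
    """
    sample = rng.sample(all_negative_scores, m)
    return hit_at_k(target_score, sample, k)
```

With few sampled negatives, a target that would miss the top k against the full catalog can easily land in the top k of the sample, inflating the metric.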

Selecting Appropriate Metrics for Evaluation of Recommender Systems

Bhupesh Rawat, Sanjay K. Dwivedi
2019 International Journal of Information Technology and Computer Science  
Moreover, in order for a recommender system to generate good-quality recommendations, it is essential for a researcher to find the most suitable evaluation metric which best matches a given recommender  ...  However, with the availability of several recommender tasks, recommender algorithms, and evaluation metrics, it is often difficult for a researcher to find their best combination.  ...  In the following subsection, we provide an overview of the most popular predictive evaluation metrics reported in the recommender systems literature.  ... 
doi:10.5815/ijitcs.2019.01.02 fatcat:df2ar6nsj5dh7an7bdvlj4wed4

New Metrics for Effective Detection of Shilling Attacks in Recommender Systems

T. Srikanth (GITAM University, Visakhapatnam, Andhra Pradesh, India), M. Shashi
2019 International Journal of Information Engineering and Electronic Business  
However, such systems are shown to be at risk of attacks. Malicious users can deliberately insert biased profiles in favor/disfavor of chosen item(s).  ...  Collaborative filtering techniques are successfully employed in recommender systems to help users counter information overload by making accurate personalized recommendations.  ...  RELATED CONCEPTS AND EXISTING LITERATURE This section describes popular attacks on recommender systems. Finally, popular shilling attack detection methods are reviewed.  ... 
doi:10.5815/ijieeb.2019.04.04 fatcat:aayqgqx365cxhfjofxjlkstzta

Facets of Fairness in Search and Recommendation [article]

Sahil Verma, Ruoyuan Gao, Chirag Shah
2020 arXiv   pre-print
Several recent works have highlighted how search and recommender systems exhibit bias along different dimensions.  ...  In doing so, this paper presents comparisons and highlights contrasts among various measures, and gaps in our conceptual and evaluative frameworks.  ...  Rank Equality: Rank equality has its origins in the metric called equalized odds, which requires equal classification error rates (false positive and false negative error) across the protected and unprotected  ... 
arXiv:2008.01194v1 fatcat:vtmdj65nvndbbavaot6nyii2qi
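The equalized-odds requirement mentioned in the snippet reduces to a concrete check: compute false positive and false negative rates per group and compare them. The sketch below illustrates that check under assumed binary labels; the group data and function names are invented for illustration.

```python
def error_rates(y_true, y_pred):
    """Return (FPR, FNR) for one group's true labels and predictions."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    neg = sum(1 for t in y_true if t == 0)
    pos = sum(1 for t in y_true if t == 1)
    return (fp / neg if neg else 0.0, fn / pos if pos else 0.0)

def equalized_odds_gap(groups):
    """groups: dict of group name -> (y_true, y_pred).

    Returns the largest pairwise gap in FPR and in FNR across groups;
    (0.0, 0.0) means equalized odds holds exactly.
    """
    rates = [error_rates(t, p) for t, p in groups.values()]
    fprs, fnrs = zip(*rates)
    return (max(fprs) - min(fprs), max(fnrs) - min(fnrs))
```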

Session-aware Recommendation: A Surprising Quest for the State-of-the-art [article]

Sara Latifi, Noemi Mauro, Dietmar Jannach
2020 arXiv   pre-print
Recommender systems are designed to help users in situations of information overload.  ...  In recent years, we observed increased interest in session-based recommendation scenarios, where the problem is to make item suggestions to users based only on interactions observed in an ongoing session  ...  Acknowledgements We are grateful to Zhongli Filippo Hu for his contributions to the integration of the algorithms. We thank Malte Ludewig for his help and support during this research.  ... 
arXiv:2011.03424v1 fatcat:tl6lorex6fdhdn73rq3dw46jdm

Do Offline Metrics Predict Online Performance in Recommender Systems? [article]

Karl Krauth, Sarah Dean, Alex Zhao, Wenshuo Guo, Mihaela Curmei, Benjamin Recht, Michael I. Jordan
2020 arXiv   pre-print
In this work we investigate the extent to which offline metrics predict online performance by evaluating eleven recommenders across six controlled simulated environments.  ...  Recommender systems operate in an inherently dynamical setting. Past recommendations influence future behavior, including which data points are observed and how user preferences change.  ...  From well-known effects like popularity bias in item recommendations to contested phenomena like polarization and radicalization among users; myopic optimization of offline metrics can cause unintended  ... 
arXiv:2011.07931v1 fatcat:fre2cuepjzcv5gtnk3ulnblywu

The Importance of Understanding False Discoveries and the Accuracy Paradox When Evaluating Quantitative Studies

Kirk Davis, Rodney Maiden
2021 Studies in Social Science Research  
for a test's usefulness is not always the best metric.  ...  Given the prevalence of publication bias and small effect sizes in the literature, the possibility of a false discovery is especially important to consider.  ...  This suggests that a reliance on accuracy as a metric for a test's usefulness is not always the best metric. Instead, false discoveries and false omissions should be considered.  ... 
doi:10.22158/sssr.v2n2p1 fatcat:xebh5xn6xzfq5nhpktuib7k4n4
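The accuracy paradox this article describes is easy to see with a small worked confusion matrix: on imbalanced data, accuracy can be high while most positive calls are false discoveries. The numbers below are invented for illustration; the false discovery and false omission rates are the quantities the article argues should be considered alongside accuracy.

```python
def confusion_summary(tp, fp, fn, tn):
    """Return (accuracy, FDR, FOR) from confusion-matrix counts.

    FDR (false discovery rate): P(truly negative | test called positive).
    FOR (false omission rate):  P(truly positive | test called negative).
    """
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    fdr = fp / (tp + fp)
    false_omission = fn / (fn + tn)
    return accuracy, fdr, false_omission
```

With 5 true positives, 45 false positives, 0 false negatives, and 950 true negatives, accuracy is 95.5% even though 90% of the positive calls are wrong, which is exactly the paradox.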
Showing results 1 — 15 of 15,940 results