Diversified Interactive Recommendation with Implicit Feedback

Yong Liu, Yingtai Xiao, Qiong Wu, Chunyan Miao, Juyong Zhang, Binqiang Zhao, Haihong Tang
In this paper, we propose a novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC2B), for interactive recommendation with users' implicit feedback.  ...  Interactive recommender systems that enable the interactions between users and the recommender system have attracted increasing research attention.  ...  Conclusion and Future Work This work proposes a novel bandit learning method, namely Diversified Contextual Combinatorial Bandit (DC2B), for interactive recommendation based on users' implicit feedback  ... 
doi:10.1609/aaai.v34i04.5931 fatcat:pd6lrcjshjautl2wvprmw5bsqa

Recent Advances in Diversified Recommendation [article]

Qiong Wu, Yong Liu, Chunyan Miao, Yin Zhao, Lu Guan, Haihong Tang
2019 arXiv   pre-print
With the rapid development of recommender systems, accuracy is no longer the only golden criterion for evaluating whether the recommendation results are satisfying or not.  ...  In this paper, we are going to review the recent advances in diversified recommendation.  ...  In this section, we review two lines of interactive methods for diversified recommendation, i.e., contextual bandit and deep reinforcement learning.  ... 
arXiv:1905.06589v1 fatcat:yzzea2ozkre67bt646ic4vij6u
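The survey above divides interactive diversified recommendation into contextual-bandit and deep-reinforcement-learning methods. As a generic illustration of the contextual-bandit line (this is not code from any listed paper; the class name, parameters, and structure are illustrative assumptions), a minimal disjoint LinUCB-style sketch keeps one ridge-regression model per item and scores items by an upper confidence bound:

```python
import numpy as np

class LinUCB:
    """Disjoint LinUCB sketch: one ridge-regression model per arm (item)."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha                               # exploration strength
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm Gram matrix
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward vector

    def select(self, x):
        """Pick the arm with the highest upper confidence bound for context x."""
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                            # ridge estimate
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, arm, x, reward):
        """Rank-one update with the observed (implicit) feedback."""
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x
```

The `alpha` term controls how strongly uncertain (rarely shown) items are favored, which is the lever diversified variants typically build on.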

A Hybrid Bandit Framework for Diversified Recommendation [article]

Qinxu Ding, Yong Liu, Chunyan Miao, Fei Cheng, Haihong Tang
2020 arXiv   pre-print
To overcome this problem, we propose the Linear Modular Dispersion Bandit (LMDB) framework, which is an online learning setting for optimizing a combination of modular functions and dispersion functions  ...  Previous interactive recommendation methods primarily focus on learning users' personalized preferences on the relevance properties of an item set.  ...  In summary, the main contributions made in this work are as follows: (1) we propose a novel bandit learning framework, namely Linear Modular Dispersion Bandit (LMDB), for diversified interactive recommendation  ... 
arXiv:2012.13245v1 fatcat:ktcq37fypjhejagbxibje6lsre

Diversity-Promoting Deep Reinforcement Learning for Interactive Recommendation [article]

Yong Liu, Yinan Zhang, Qiong Wu, Chunyan Miao, Lizhen Cui, Binqiang Zhao, Yin Zhao, Lu Guan
2019 arXiv   pre-print
In this paper, we propose a novel recommendation model, named Diversity-promoting Deep Reinforcement Learning (D^2RL), which encourages the diversity of recommendation results in interactive recommendations  ...  Interactive recommendation that models the explicit interactions between users and the recommender system has attracted a lot of research attention in recent years.  ...  Differing from existing interactive recommendation methods, this paper proposes a novel recommendation model, namely Diversity-promoting Deep Reinforcement Learning (D^2RL), for diversified interactive  ... 
arXiv:1903.07826v1 fatcat:s5nlfafmvjhmlct5gar2qkcrc4

Unbiased Cascade Bandits: Mitigating Exposure Bias in Online Learning to Rank Recommendation [article]

Masoud Mansoury, Himan Abdollahpouri, Bamshad Mobasher, Mykola Pechenizkiy, Robin Burke, Milad Sabouri
2021 arXiv   pre-print
This phenomenon can be viewed as a recommendation feedback loop: the system repeatedly recommends certain items at different time points and interactions of users with those items will amplify bias towards  ...  We analyze these algorithms on their ability to handle exposure bias and provide a fair representation for items and suppliers in the recommendation results.  ...  Cascading bandit models In a cascading bandit [14, 19, 33], the learning agent interacts with users by delivering the recommendations to them and receiving feedback.  ... 
arXiv:2108.03440v1 fatcat:mwhsy5ddrjgn5mjk2e24lisbkm
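The cascade model the snippet refers to assumes the user scans a ranked list top-down and clicks the first attractive item: items above the click were examined and skipped, items below it were never seen. As a minimal sketch of how a single click is turned into per-item observations (function name and structure are my own, not from the paper):

```python
def cascade_feedback(ranked_items, click_index):
    """Cascade-model feedback: items above the click are examined and
    skipped (reward 0), the clicked item gets reward 1, and items below
    the click are unobserved. No click means every item was skipped."""
    observations = {}
    if click_index is None:
        for item in ranked_items:       # full list examined, nothing clicked
            observations[item] = 0
        return observations
    for pos, item in enumerate(ranked_items):
        if pos < click_index:
            observations[item] = 0      # examined but skipped
        elif pos == click_index:
            observations[item] = 1      # clicked; scanning stops here
            break
    return observations
```

Items below the click simply never appear in the returned dictionary, which is exactly the partial-feedback structure that makes exposure bias possible in this setting.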

Cluster Based Deep Contextual Reinforcement Learning for top-k Recommendations [article]

Anubha Kabra, Anu Agarwal, Anil Singh Parihar
2020 arXiv   pre-print
To sufficiently cater to this need, we propose a novel method for generating top-k recommendations by creating an ensemble of clustering with reinforcement learning.  ...  Rapid advancements in the E-commerce sector over the last few decades have led to an imminent need for personalised, efficient and dynamic recommendation systems.  ...  This paper focuses on exploration using dueling bandit gradient descent for reinforcement learning based recommendation strategy.  ... 
arXiv:2012.02291v1 fatcat:ak7qbf6ngve3hlz6ozuvz3wq4y

Show Me the Whole World: Towards Entire Item Space Exploration for Interactive Personalized Recommendations [article]

Yu Song, Jianxun Lian, Shuai Sun, Hong Huang, Yu Li, Hai Jin, Xing Xie
2021 arXiv   pre-print
User interest exploration is an important and challenging topic in recommender systems, which alleviates the closed-loop effects between recommendation models and user-item interactions.  ...  Contextual bandit (CB) algorithms strive to make a good trade-off between exploration and exploitation so that users' potential interests have a chance to be exposed.  ...  At the core of bandit algorithms is finding an optimal trade-off between exploitation (to recommend fully based on user profiles learned from user interaction history) and exploration (find out the new  ... 
arXiv:2110.09905v1 fatcat:af2jsd5r2nhvtc7hmwbdbxwbim
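The exploitation/exploration trade-off described in the snippet above is classically captured by UCB1, which adds a confidence bonus to each arm's empirical mean. A minimal context-free sketch (illustrative only, not code from the paper):

```python
import math

def ucb1_select(counts, rewards, t):
    """UCB1: score each arm by empirical mean + exploration bonus
    sqrt(2 ln t / n). Untried arms are selected first."""
    best_arm, best_score = None, float('-inf')
    for arm, n in counts.items():
        if n == 0:
            return arm                  # always try untried arms first
        score = rewards[arm] / n + math.sqrt(2 * math.log(t) / n)
        if score > best_score:
            best_arm, best_score = arm, score
    return best_arm
```

The bonus shrinks as an arm accumulates pulls, so rarely shown items keep getting chances without permanently crowding out well-performing ones.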

Spoiled for Choice? Personalized Recommendation for Healthcare Decisions: A Multi-Armed Bandit Approach [article]

Tongxin Zhou, Yingfei Wang, Lu Yan, Yong Tan
2020 arXiv   pre-print
Taking into account that users' health behaviors can be highly dynamic and diverse, we propose a multi-armed bandit (MAB)-driven recommendation framework, which enables us to adaptively learn users' preference  ...  The second component is a diversity constraint, which structurally diversifies recommendations in different dimensions to provide users with well-rounded support.  ...  Healthcare Recommendation: Deep Learning and Multi-Armed Bandit We propose a deep-learning and diversity-enhanced MAB framework for recommending healthcare interventions to address the challenges and research  ... 
arXiv:2009.06108v1 fatcat:rs7utaw2sfa45dajb6mnzm4qcu
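The simplest instance of the MAB-driven recommendation the snippet describes is epsilon-greedy selection of a top-k slate. The sketch below is a generic illustration only (the paper's diversity constraint and deep-learning components are not reproduced; names and structure are assumptions):

```python
import random

def epsilon_greedy_recommend(counts, rewards, k, epsilon=0.1, rng=random):
    """Pick k distinct arms: with probability epsilon explore a random
    remaining arm, otherwise exploit the highest empirical mean reward.
    Untried arms get an optimistic (infinite) score."""
    arms = list(counts)
    chosen = []
    while len(chosen) < min(k, len(arms)):
        candidates = [a for a in arms if a not in chosen]
        if rng.random() < epsilon:
            chosen.append(rng.choice(candidates))
        else:
            chosen.append(max(
                candidates,
                key=lambda a: rewards[a] / counts[a] if counts[a] else float('inf')))
    return chosen
```

A diversity-enhanced variant would additionally constrain `candidates`, e.g. capping how many arms per behavioral dimension may enter the slate.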

Online Diverse Learning to Rank from Partial-Click Feedback [article]

Prakhar Gupta, Gaurush Hiranandani, Harvineet Singh, Branislav Kveton, Zheng Wen, Iftikhar Ahamath Burhanuddin
2018 arXiv   pre-print
We propose an online learning algorithm, CascadeLSB, for solving our problem.  ...  Learning to rank is an important problem in machine learning and recommender systems. In a recommender system, a user is typically recommended a list of items.  ...  , an online learning framework for learning to rank in the diverse cascade model. • We propose CascadeLSB, a computationally-efficient algorithm for learning to rank in the diverse cascading bandit.  ... 
arXiv:1811.00911v2 fatcat:ssfyfen6svh7pah5jiyzmwpvlu

D2RLIR : an improved and diversified ranking function in interactive recommendation systems based on deep reinforcement learning [article]

Vahid Baghi, Seyed Mohammad Seyed Motehayeri, Ali Moeini, Rooholah Abedian
2021 arXiv   pre-print
Recently, interactive recommendation systems based on reinforcement learning have attracted researchers' attention because they treat the recommendation procedure as a dynamic process and update the recommendation policy accordingly  ...  This paper proposes a deep reinforcement learning based recommendation system by utilizing an Actor-Critic architecture to model users' dynamic interactions with the recommender agent and maximize the expected  ...  Some works [1]-[5] formulated the interactive recommendation procedure as a multi-armed bandit problem.  ... 
arXiv:2110.15089v2 fatcat:t353qiznn5exdfevdjwvs2rb3a

Context Uncertainty in Contextual Bandits with Applications to Recommender Systems [article]

Hao Wang, Yifei Ma, Hao Ding, Yuyang Wang
2022 arXiv   pre-print
Recurrent neural networks have proven effective in modeling sequential user feedback for recommender systems.  ...  Our theoretical analysis shows that REN can preserve the rate-optimal sublinear regret even when there exists uncertainty in the learned representations.  ...  Acknowledgement The authors thank Tim Januschowski, Alex Smola, the AWS AI's Personalize Team and ML Forecast Team, as well as the reviewers/SPC/AC for the constructive comments to improve the paper.  ... 
arXiv:2202.00805v3 fatcat:q7zrgiizvjef7f73gux2srfjp4

A Contextual-Bandit Approach to Online Learning to Rank for Relevance and Diversity [article]

Chang Li, Haoyun Feng, Maarten de Rijke
2019 arXiv   pre-print
We propose a hybrid contextual bandit approach, called CascadeHybrid, for solving this problem.  ...  It is a core area in modern interactive systems, such as search engines, recommender systems, or conversational assistants.  ...  The learning agent interacts with CB and learns from the feedback.  ... 
arXiv:1912.00508v2 fatcat:xxkgm4ao2vgs5dge7tsczdiqly

Modeling and Counteracting Exposure Bias in Recommender Systems [article]

Sami Khenissi, Olfa Nasraoui
2020 arXiv   pre-print
Then we model the exposure that is borne from the interaction between the user and the recommender system and propose new debiasing strategies for these systems.  ...  Our research findings show the importance of understanding the nature of and dealing with bias in machine learning models such as recommender systems that interact directly with humans, and are thus causing  ...  For instance, a user who keeps seeing the same type of recommendation through several iterations can lose interest and stop interacting with these recommendations.  ... 
arXiv:2001.04832v1 fatcat:4tcc56lcrjflhjbibjknxljdv4

Collaborative Filtering Bandits

Shuai Li, Alexandros Karatzoglou, Claudio Gentile
2016 Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval - SIGIR '16  
In this work, we investigate an adaptive clustering technique for content recommendation based on exploration-exploitation strategies in contextual multi-armed bandit settings.  ...  Classical collaborative filtering, and content-based filtering methods try to learn a static recommendation model given training data.  ...  Also, the first and the third author acknowledge the support from Amazon AWS Award in Machine Learning Research Grant.  ... 
doi:10.1145/2911451.2911548 dblp:conf/sigir/LiKG16 fatcat:3m3aussumjco3i7qerbkxs5fxm

Supporting Complex Information-Seeking Tasks with Implicit Constraints [article]

Ali Ahmadvand, Negar Arabzadeh, Julia Kiseleva, Patricio Figueroa Sanz, Xin Deng, Sujay Jauhar, Michael Gamon, Eugene Agichtein, Ned Friend, Aniruddha
2022 arXiv   pre-print
Such systems are inherently helpful for day-to-day user tasks requiring planning that are usually time-consuming, sometimes tricky, and cognitively taxing.  ...  We have designed and deployed a platform to collect the data from users approaching such complex interactive systems.  ...  Parapar and Radlinski [71] proposed a multi-armed bandit model for personalized recommendations by diversifying the user preferences.  ... 
arXiv:2205.00584v1 fatcat:waapsu6kjfgolbvhny36kqffsa
Showing results 1 — 15 out of 735 results