1,063 Hits in 3.9 sec

Alternating Linear Bandits for Online Matrix-Factorization Recommendation [article]

Hamid Dadkhahi, Sahand Negahban
2018 arXiv   pre-print
In this paper, we propose a novel algorithm for online matrix-factorization recommendation that combines linear bandits and alternating least squares.  ...  We consider the problem of online collaborative filtering, where items are recommended to users over time.  ...
arXiv:1810.09401v1 fatcat:h4mm75e4sngrjikkgxg33y4f5a
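The combination this abstract describes, a linear-bandit item choice alternating with least-squares factor updates, can be sketched roughly as follows. This is a generic illustration, not the paper's algorithm: the dimensions, the regularizer `lam`, and the count-based exploration bonus (a stand-in for proper linear-bandit confidence widths) are all assumptions.

```python
import numpy as np

def als_user_update(V_obs, rewards, lam=0.1):
    """Least-squares (ALS-style) update of one user's latent vector, given
    the latent vectors V_obs of the items already shown and their rewards."""
    d = V_obs.shape[1]
    return np.linalg.solve(V_obs.T @ V_obs + lam * np.eye(d), V_obs.T @ rewards)

def choose_item(u, V, counts, t, c=1.0):
    """Score items linearly through the current user estimate u, plus a
    simple count-based exploration bonus."""
    bonus = c * np.sqrt(np.log(t + 1.0) / (counts + 1.0))
    return int(np.argmax(V @ u + bonus))

# usage: estimate a user from two observed items, then pick the next one
V = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])   # item latent factors
u = als_user_update(V[:2], np.array([1.0, 0.0]))      # rewards for items 0, 1
item = choose_item(u, V, counts=np.zeros(3), t=0)
```

In a full alternating scheme the item factors would be refit the same way with users held fixed, which is what ties the bandit step back to matrix factorization.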

Bandits Under The Influence (Extended Version) [article]

Silviu Maniu, Stratis Ioannidis, Bogdan Cautis
2020 arXiv   pre-print
We present online recommendation algorithms rooted in the linear multi-armed bandit literature.  ...  Recommender systems should adapt to user interests as the latter evolve. A prevalent cause for the evolution of user interests is the influence of their social circle.  ...  LinUCB [21] is an alternative to LinREL for linear bandits. Unfortunately, in our case, it leads to an intractable problem.  ...
arXiv:2009.10135v1 fatcat:s5df3dif3neqlfrbl43lirau3m
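The LinUCB rule mentioned in the snippet keeps a per-arm ridge regression and pulls the arm maximizing predicted reward plus a confidence width. A minimal NumPy sketch of the standard algorithm (the exploration weight `alpha` and the toy context are illustrative):

```python
import numpy as np

def linucb_choose(A_list, b_list, x, alpha=1.0):
    """Pick the arm with the highest upper confidence bound.

    A_list[a] is arm a's d x d ridge Gram matrix, b_list[a] its accumulated
    reward-weighted contexts, and x the current context vector."""
    scores = []
    for A, b in zip(A_list, b_list):
        A_inv = np.linalg.inv(A)
        theta = A_inv @ b                                  # ridge estimate
        scores.append(theta @ x + alpha * np.sqrt(x @ A_inv @ x))
    return int(np.argmax(scores))

def linucb_update(A_list, b_list, arm, x, reward):
    """Rank-one update of the chosen arm's statistics."""
    A_list[arm] += np.outer(x, x)
    b_list[arm] += reward * x

# usage: two arms, 3-dimensional contexts
d, n_arms = 3, 2
A_list = [np.eye(d) for _ in range(n_arms)]
b_list = [np.zeros(d) for _ in range(n_arms)]
x = np.array([1.0, 0.0, 0.5])
arm = linucb_choose(A_list, b_list, x)
linucb_update(A_list, b_list, arm, x, reward=1.0)
```

Inverting `A` per arm is O(d^3); in practice a Sherman–Morrison update of the inverse avoids that cost.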

Fast Distributed Bandits for Online Recommendation Systems [article]

Kanak Mahadik, Qingyun Wu, Shuai Li, Amit Sabne
2020 arXiv   pre-print
Contextual bandit algorithms are commonly used in recommender systems, where content popularity can change rapidly.  ...  Recent recommendation algorithms that learn clustering or social structures between users have exhibited higher recommendation accuracy.  ...  RL-based online recommendation algorithms: Reinforcement learning (RL) techniques are an alternative to bandit-based recommendation systems.  ... 
arXiv:2007.08061v1 fatcat:6zlznh2cjbe5xo76ulcycgh5y4

Bandits Warm-up Cold Recommender Systems [article]

Jérémie Mary, Romaric Gaudel, Philippe Preux (INRIA Lille - Nord Europe, LIFL)
2014 arXiv   pre-print
Overall, the goal of this paper is to bridge the gap between recommender systems based on matrix factorizations and those based on contextual bandits.  ...  Then, we propose an online setting closer to the actual use of recommender systems; this setting is inspired by the bandit framework.  ...  We also introduce a methodology to use a classical partially filled rating matrix to assess the online performance of a bandit-based recommendation algorithm.  ... 
arXiv:1407.2806v1 fatcat:fkomqud3avgzjeqkp3p6ywydae

Bandits and Recommender Systems [chapter]

Jérémie Mary, Romaric Gaudel, Philippe Preux
2015 Lecture Notes in Computer Science  
Overall, the goal of this paper is to bridge the gap between recommender systems based on matrix factorizations and those based on contextual bandits.  ...  Then, we propose an online setting closer to the actual use of recommender systems; this setting is inspired by the bandit framework.  ...  We also introduce a methodology to use a classical partially filled rating matrix to assess the online performance of a bandit-based recommendation algorithm.  ... 
doi:10.1007/978-3-319-27926-8_29 fatcat:627el3ubinblvkjbd4rcczcdpq

Deep neural network marketplace recommenders in online experiments [article]

Simen Eide, Ning Zhou
2018 arXiv   pre-print
This paper focuses on the challenge of measuring recommender performance and summarizes the online experiment results with several promising types of deep neural network recommenders: hybrid item-representation models combining features from user engagement and content, sequence-based models, and multi-armed bandit models that optimize user engagement by re-ranking proposals from multiple submodels.  ...  Similar to (i), we factorize a user-postcode matrix D_{t ≥ t_th} by a specific date t_th.  ...
arXiv:1809.02130v1 fatcat:hbe62jvfhvcp5i3bfvmaoulgoa

Contextual Combinatorial Bandit and its Application on Diversified Online Recommendation [chapter]

Lijing Qin, Shouyuan Chen, Xiaoyan Zhu
2014 Proceedings of the 2014 SIAM International Conference on Data Mining  
For example, most traditional techniques are based on similarity or overlap among existing data; however, there may not exist sufficient historical records for some new users to predict their preferences  ...  Recommender systems are faced with new challenges that are beyond traditional techniques.  ...  ratings for 3900 movies by 6040 users of an online movie recommendation service [19].  ...
doi:10.1137/1.9781611973440.53 dblp:conf/sdm/QinCZ14 fatcat:v36nne7k4vbyxihof7mqprdipi

A Linear Bandit for Seasonal Environments [article]

Giuseppe Di Benedetto, Vito Bellini, Giovanni Zappella
2020 arXiv   pre-print
Contextual bandit algorithms are extremely popular and widely used in recommendation systems to provide online personalised recommendations.  ...  In the music recommendation scenario, for instance, people's music taste can abruptly change during certain events, such as Halloween or Christmas, and revert to the previous music taste soon after.  ...  Bandit algorithms are extremely popular and widely used in recommender systems, due to their ability to efficiently deal with the exploration-exploitation trade-off in an online fashion.  ...
arXiv:2004.13576v1 fatcat:k4cf436vfbaijgo2ofza2glyrq

Contextual Bandits for adapting to changing User preferences over time [article]

Dattaraj Rao
2020 arXiv   pre-print
Next, we develop a novel algorithm for solving the contextual bandit problem.  ...  Similar to linear bandits, this algorithm maps the reward as a function of the context vector, but uses an array of learners to capture the variation between actions/arms.  ...  We will compare the two and create a matrix of their accuracies.  ...
arXiv:2009.10073v2 fatcat:dxg4vd6xvbeklmtpgqumynex5e
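The "array of learners" idea, one reward model per arm with a shared context, can be sketched as an epsilon-greedy policy with an independent ridge regressor per arm. The learner type and exploration rule here are generic stand-ins, not the paper's exact algorithm:

```python
import numpy as np

class PerArmBandit:
    """One independent ridge regressor per arm; epsilon-greedy selection."""

    def __init__(self, n_arms, dim, lam=1.0, eps=0.1, seed=0):
        self.A = [lam * np.eye(dim) for _ in range(n_arms)]  # per-arm Gram matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]      # per-arm moment vectors
        self.eps = eps
        self.rng = np.random.default_rng(seed)

    def act(self, x):
        if self.rng.random() < self.eps:                     # explore uniformly
            return int(self.rng.integers(len(self.A)))
        preds = [np.linalg.solve(A, b) @ x for A, b in zip(self.A, self.b)]
        return int(np.argmax(preds))                         # exploit best learner

    def update(self, arm, x, r):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += r * x

# usage: greedy variant (eps=0) on a 2-arm, 2-feature problem
bandit = PerArmBandit(n_arms=2, dim=2, eps=0.0)
x = np.array([1.0, 0.0])
arm = bandit.act(x)              # all-zero predictions tie; argmax picks arm 0
bandit.update(arm, x, r=1.0)
```

Because each arm trains only on its own pulls, the per-arm models can diverge freely, which is exactly the "variation between actions/arms" the abstract is after.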

Bayesian Linear Bandits for Large-Scale Recommender Systems [article]

Saeed Ghoorchian, Setareh Maghsudi
2022 arXiv   pre-print
We develop a decision-making policy for a linear bandit problem with high-dimensional context vectors and several arms.  ...  That is especially challenging when the decision-maker has a variety of items to recommend. In this paper, we build upon the linear contextual multi-armed bandit framework to address this problem.  ...  Index Terms: recommender systems, decision-making, online learning, multi-armed bandits  ...
arXiv:2202.03167v1 fatcat:owzxaod45bbh3fvutxlghmkgse

The Missing Piece in Complex Analytics: Low Latency, Scalable Model Management and Serving with Velox [article]

Daniel Crankshaw, Peter Bailis, Joseph E. Gonzalez, Haoyuan Li, Zhao Zhang, Michael J. Franklin, Ali Ghodsi, Michael I. Jordan
2014 arXiv   pre-print
To provide up-to-date results for these complex models, Velox also facilitates lightweight online model maintenance and selection (i.e., dynamic weighting).  ...  Velox is a data management system for facilitating the next steps in real-world, large-scale analytics pipelines: online model management, maintenance, and serving.  ...  Hellerstein, Tomer Kaftan, Henry Milner, Ion Stoica, Vikram Sreekanti, and the anonymous CIDR reviewers for their thoughtful feedback on this work.  ... 
arXiv:1409.3809v2 fatcat:33a5muyjlbhmrjxn2zkwxlxpxq

Leveraging Post Hoc Context for Faster Learning in Bandit Settings with Applications in Robot-Assisted Feeding [article]

Ethan K. Gordon, Sumegh Roychowdhury, Tapomayukh Bhattacharjee, Kevin Jamieson, Siddhartha S. Srinivasa
2021 arXiv   pre-print
Previous work showed that the problem can be represented as a linear bandit with visual context.  ...  In general, we propose a modified linear contextual bandit framework augmented with post hoc context observed after action selection to empirically increase learning speed and reduce cumulative regret.  ...  If the post hoc context model is known perfectly, it can recommend the correct action for a given context after only a single attempt, cutting down exploration by a factor of K.  ... 
arXiv:2011.02604v2 fatcat:35kyenmzwvc5vmznmfxocbmeu4
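One way to read the augmentation this abstract describes: regress reward jointly on the pre-choice context x and the post hoc context z revealed after acting, and zero out z at decision time. This is a hedged sketch of that reading under assumed feature layout and names, not the paper's estimator:

```python
import numpy as np

def posthoc_update(A, b, x, z, r):
    """Joint ridge-statistics update over the stacked features [x; z],
    applied after the post hoc context z and reward r are revealed."""
    phi = np.concatenate([x, z])
    A += np.outer(phi, phi)
    b += r * phi

def prechoice_score(A, b, x, dz):
    """Score an arm before acting: z is not yet observed, so its dz
    features are zeroed in this simple sketch."""
    phi = np.concatenate([x, np.zeros(dz)])
    return float(np.linalg.solve(A, b) @ phi)

# usage: one observation, then a pre-choice score
A, b = np.eye(4), np.zeros(4)
posthoc_update(A, b, x=np.array([1.0, 0.0]), z=np.array([1.0, 0.0]), r=1.0)
score = prechoice_score(A, b, np.array([1.0, 0.0]), dz=2)
```

The intuition matches the snippet: each observation informs both feature blocks at once, so the model needs fewer exploratory pulls per context.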

Stochastic Low-rank Tensor Bandits for Multi-dimensional Online Decision Making [article]

Jie Zhou, Botao Hao, Zheng Wen, Jingfei Zhang, Will Wei Sun
2022 arXiv   pre-print
Multi-dimensional online decision making plays a crucial role in many real applications such as online recommendation and digital marketing.  ...  We propose two learning algorithms, tensor elimination and tensor epoch-greedy, for tensor bandits without context, and derive finite-time regret bounds for them.  ...  ., 2011] for linear bandits with finitely many arms.  ...
arXiv:2007.15788v2 fatcat:ie2sagqvv5ay7i7ugkqdtz2idy

RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising [article]

David Rohde, Stephen Bonner, Travis Dunlop, Flavian Vasile, Alexandros Karatzoglou
2018 arXiv   pre-print
We believe that this is an important step forward for the field of recommendation systems research, one that could open up an avenue of collaboration between the recommender systems and reinforcement learning communities.  ...  To this end we introduce RecoGym, an RL environment for recommendation, which is defined by a model of user traffic patterns on e-commerce sites and the users' responses to recommendations on publisher websites.  ...  The recommendation literature on modeling organic user behavior is vast: often, organic recommendation is framed as a user-item matrix completion task, with matrix factorization as a common approach [6].  ...
arXiv:1808.00720v2 fatcat:img6bmapjjak3a27wuicmta5pi

Interactive collaborative filtering

Xiaoxue Zhao, Weinan Zhang, Jun Wang
2013 Proceedings of the 22nd ACM International Conference on Information & Knowledge Management - CIKM '13  
We formulate interactive CF with the probabilistic matrix factorization (PMF) framework, and leverage several exploitation-exploration algorithms to select items, including the empirical Thompson sampling  ...  Bringing the interactive mechanism back to the CF process is fundamental, because the ultimate goal of a recommender system is the discovery of interesting items for individual users, and yet users  ...  For instance, latent factor models have become quite popular in recent years [20], while matrix factorization techniques [22] have shown their effectiveness in various settings such as  ...
doi:10.1145/2505515.2505690 dblp:conf/cikm/ZhaoZW13 fatcat:tzufqxtwtzffxjcimiqnqnbnfy
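The Thompson-sampling step in this interactive-CF setting amounts to drawing a user latent vector from its PMF-style Gaussian posterior and recommending the best item under the draw. A minimal sketch, where the posterior parameters and item factors are illustrative placeholders rather than values from the paper:

```python
import numpy as np

def thompson_select(mu_u, Sigma_u, V, rng):
    """Sample a user vector from its Gaussian posterior, then act
    greedily with respect to the sample (Thompson sampling)."""
    u_sample = rng.multivariate_normal(mu_u, Sigma_u)
    return int(np.argmax(V @ u_sample))

# usage: a posterior concentrated almost entirely on the first factor
rng = np.random.default_rng(0)
V = np.array([[1.0, 0.0], [0.0, 1.0]])   # item latent factors, one row per item
item = thompson_select(np.array([1.0, 0.0]), 1e-9 * np.eye(2), V, rng)
```

Exploration falls out automatically: the wider the posterior, the more the sampled scores, and hence the recommended items, vary across interactions.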
Showing results 1 — 15 out of 1,063 results