
Improving Transformer-based Sequential Recommenders through Preference Editing [article]

Muyang Ma, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Huasheng Liang, Jun Ma, Maarten de Rijke
2021 arXiv   pre-print
Then, we devise a preference editing-based self-supervised learning mechanism for training MrTransformer, which contains two main operations: preference separation and preference recombination.  ...  One of the key challenges in Sequential Recommendation (SR) is how to extract and represent user preferences.  ...  ACKNOWLEDGMENTS This research was partially supported by the National Key R&D Program of China with grant No. 2020YFB1406704, the Natural Science Foundation of China (61972234, 61902219, 62072279), the  ... 
arXiv:2106.12120v1 fatcat:4fvlg72ldvcjfep3cipepifhh4

Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [article]

Zhiwei Liu, Yongjun Chen, Jia Li, Philip S. Yu, Julian McAuley, Caiming Xiong
2021 arXiv   pre-print
To this end, we propose a novel framework, Contrastive Self-supervised Learning for sequential Recommendation (CoSeRec).  ...  In this paper, we investigate the application of contrastive Self-Supervised Learning (SSL) to sequential recommendation, as a way to alleviate some of these issues.  ... 
arXiv:2108.06479v1 fatcat:tamq3iuqbjezlnwcnnxzfvre6q
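The CoSeRec snippet above centers on augmenting item sequences to build positive pairs for contrastive SSL. The following is a minimal, hypothetical Python sketch of three commonly used sequence augmentations (crop, mask, reorder) and a two-view generator; the ratios, function names, and mask token are illustrative assumptions, not the paper's exact operators:

```python
import random

def crop(seq, ratio=0.6):
    """Keep a random contiguous sub-sequence covering `ratio` of the items."""
    n = max(1, int(len(seq) * ratio))
    start = random.randrange(len(seq) - n + 1)
    return seq[start:start + n]

def mask(seq, ratio=0.3, mask_id=0):
    """Replace a random subset of items with a special mask token."""
    out = list(seq)
    for i in random.sample(range(len(out)), int(len(out) * ratio)):
        out[i] = mask_id
    return out

def reorder(seq, ratio=0.3):
    """Shuffle a random contiguous segment, keeping the rest in order."""
    out = list(seq)
    n = max(1, int(len(out) * ratio))
    start = random.randrange(len(out) - n + 1)
    segment = out[start:start + n]
    random.shuffle(segment)
    out[start:start + n] = segment
    return out

def two_views(seq):
    """Two independently augmented views of one sequence (a positive pair)."""
    ops = [crop, mask, reorder]
    return random.choice(ops)(seq), random.choice(ops)(seq)
```

Each pair of views is then pushed together in embedding space while views of different users' sequences are pushed apart.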

Collaborative Graph Learning for Session-based Recommendation

Zhiqiang Pan, Fei Cai, Wanyu Chen, Chonghao Chen, Honghui Chen
2022 ACM Transactions on Information Systems  
supervisions for learning the item representations.  ...  Thus, in this article, we propose a Collaborative Graph Learning (CGL) approach for session-based recommendation.  ...  [73] propose S3-Rec, which applies mutual information maximization (MIM) to sequential recommendation by pretraining and fine-tunes the model with supervised signals. In addition, Ma et al.  ... 
doi:10.1145/3490479 fatcat:c3xhg3pesbbnhh4zzjrths6xbi

On the Effectiveness of Sampled Softmax Loss for Item Recommendation [article]

Jiancan Wu, Xiang Wang, Xingyu Gao, Jiawei Chen, Hongcheng Fu, Tianyu Qiu, Xiangnan He
2022 arXiv   pre-print
Its special case, InfoNCE loss, has been widely used in self-supervised learning and exhibited remarkable performance for contrastive learning.  ...  informative gradients to optimize model parameters; and (3) maximizing the ranking metric, which facilitates top-K performance.  ...  Collaborative Filtering for Implicit Feedback Datasets.  ...  Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-Rec: Self-Supervised Learning for  ... 
arXiv:2201.02327v1 fatcat:xxcxorondvbulge3q6pgfbgqee
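The snippet above discusses the sampled softmax loss and its InfoNCE special case: the positive item's score competes against a sampled set of negatives rather than the full item catalogue. A minimal, hypothetical Python sketch (the function name and temperature parameter are illustrative assumptions):

```python
import math

def sampled_softmax_loss(pos_score, neg_scores, temperature=1.0):
    """InfoNCE-style sampled softmax: negative log-probability of the
    positive item under a softmax over the positive plus sampled negatives."""
    logits = [pos_score / temperature] + [s / temperature for s in neg_scores]
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_z)  # -log softmax of the positive
```

When the positive score dominates the negatives the loss approaches zero; when all scores are equal it equals log(1 + number of negatives).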

Hierarchical Transformers for Group-Aware Sequential Recommendation: Application in MOBA Games

Vladimir Araujo, Helem Salinas, Alvaro Labarca, Andrés Villa, Denis Parra
2022 User Modeling, Adaptation, and Personalization  
To fill this gap, in this work, we propose HT4Rec for group-aware sequential item recommendation.  ...  In Multiplayer Online Battle Arena (MOBA) games, a popular game genre, these systems are useful for recommending items for a character during a match.  ...  ACKNOWLEDGMENTS The main author thanks Jorge Esteban Martínez for insightful discussions about the DOTA game.  ... 
doi:10.1145/3511047.3537667 dblp:conf/um/AraujoSLVP22 fatcat:fqrzhjx3lvhcxpv53gcs3biyyi

Intent Contrastive Learning for Sequential Recommendation [article]

Yongjun Chen, Zhiwei Liu, Jia Li, Julian McAuley, Caiming Xiong
2022 pre-print
The core idea is to learn users' intent distribution functions from unlabeled user behavior sequences and optimize SR models with contrastive self-supervised learning (SSL) by considering the learned intents  ...  Users' interactions with items are driven by various intents (e.g., preparing for holiday gifts, shopping for fishing equipment, etc.). However, users' underlying intents are often unobserved/latent, making  ...  Contrastive Self-Supervised Learning Contrastive Self-Supervised Learning (SSL) has attracted much attention from different research communities, including CV [2, 4, 9, 14, 20] and NLP [7, 8, 29, 50],  ... 
doi:10.1145/3485447.3512090 arXiv:2202.02519v1 fatcat:gft7ku773zcjha3i6troi5gf4y
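The snippet above describes optimizing SR models with contrastive SSL over learned latent intents. One way to picture this is assigning each sequence embedding to its nearest intent prototype and applying an InfoNCE loss over the prototypes; the sketch below is a hypothetical simplification (prototype assignment, names, and temperature are assumptions, not the paper's exact formulation):

```python
import math

def nearest_intent(seq_emb, prototypes):
    """Assign a sequence embedding to its closest intent prototype (E-step)."""
    dots = [sum(a * b for a, b in zip(seq_emb, p)) for p in prototypes]
    return max(range(len(prototypes)), key=lambda k: dots[k])

def intent_contrastive_loss(seq_emb, prototypes, temperature=0.1):
    """Pull the sequence toward its assigned intent prototype and push it
    away from the other prototypes (an InfoNCE over intents, M-step)."""
    k = nearest_intent(seq_emb, prototypes)
    logits = [sum(a * b for a, b in zip(seq_emb, p)) / temperature
              for p in prototypes]
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[k] - log_z)
```

Alternating assignment and loss minimization resembles an EM loop: intents are re-estimated from the current embeddings, then embeddings are pulled toward their assigned intent.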

Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models

Kun Zhou, Wayne Xin Zhao, Sirui Wang, Fuzheng Zhang, Wei Wu, Ji-Rong Wen
2021 Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing   unpublished
S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization.  ...  EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 2557–2563.  ...  learning to converse using smaller data with augmentation.  ... 
doi:10.18653/v1/2021.emnlp-main.315 fatcat:hnbvwxrf5rbqdfp6aan5zjfi4q