A copy of this work (application/pdf) was available on the public web and has been preserved in the Wayback Machine; the capture dates from 2019. The original URL can also be visited.
Privacy and Fairness in Recommender Systems via Adversarial Training of User Representations
[article] arXiv pre-print, 2018
Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks of such systems have previously been studied mostly in the context of recovering personal information, in the form of usage records, from the training data. However, the user representations themselves may be combined with external data to recover private user attributes such as gender and age. In this paper we show that user vectors calculated by a common recommender system can
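As background for the abstract, a minimal sketch (not the paper's code) of the latent factor model it describes: users and items become low-dimensional vectors, and a user-item preference is scored as their dot product. All names, sizes, and random values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 4, 5, 3      # k latent dimensions (assumed small)
U = rng.normal(size=(n_users, k))  # user representations (rows = users)
V = rng.normal(size=(n_items, k))  # item representations (rows = items)

def score(u, i):
    """Predicted preference of user u for item i: dot product of vectors."""
    return float(U[u] @ V[i])

# Full predicted rating matrix; the rows of U are the user representations
# that, per the abstract, may leak private attributes when combined with
# external data.
scores = U @ V.T
print(scores.shape)  # prints (4, 5)
```

In a trained system, `U` and `V` would be fit to observed interactions (e.g. by matrix factorization) rather than sampled randomly as here.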
arXiv:1807.03521v3
fatcat:2myhg3taabgljino3rwjzzmk6q