Unsupervised Risk for Privacy

Christophe Cerisara, Alfredo Cuzzocrea
2021 IEEE International Conference on Big Data (Big Data)
This position paper deals with privacy for deep neural networks, more precisely with robustness to membership inference attacks. Current state-of-the-art methods, such as those based on differential privacy and training-loss regularization, mainly aim to improve the trade-off between privacy guarantees and the resulting loss in model accuracy. We propose a new research direction that challenges this view and that is based on novel approximations of the training objective of deep learning models. The resulting loss offers several important advantages with respect to both privacy and model accuracy: it may exploit unlabeled corpora, it both regularizes the model and improves its generalization properties, and it encodes corpora into a latent low-dimensional parametric representation that complies with Federated Learning architectures. Arguments are detailed in the paper to support the proposed approach and its potential beneficial impact on preserving both the privacy and the quality of deep learning.
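To make the kind of objective the abstract alludes to more concrete, below is a minimal, purely illustrative PyTorch sketch, not the paper's actual approximation: it combines a supervised cross-entropy loss on labeled data with an unsupervised term that fits the model's outputs on unlabeled data to a low-dimensional parametric (here, diagonal Gaussian) representation. The class names, the Gaussian choice, and parameters such as `proj_dim` and `lambda_u` are assumptions made for this example only.

```python
# Illustrative sketch (assumed setup, not the paper's derivation):
# supervised cross-entropy + an unsupervised parametric term on unlabeled data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=4, proj_dim=2):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_classes)
        )
        # Low-dimensional parametric summary of the output distribution:
        # a diagonal Gaussian over projected logits (illustrative choice).
        self.proj = nn.Linear(n_classes, proj_dim, bias=False)
        self.mu = nn.Parameter(torch.zeros(proj_dim))
        self.log_var = nn.Parameter(torch.zeros(proj_dim))

    def forward(self, x):
        return self.backbone(x)

    def unsup_nll(self, logits):
        # Negative log-likelihood of projected logits under the Gaussian.
        z = self.proj(logits)
        var = self.log_var.exp()
        return 0.5 * (((z - self.mu) ** 2) / var + self.log_var).sum(dim=1).mean()

def training_step(model, opt, x_lab, y_lab, x_unlab, lambda_u=0.1):
    """One step of a combined objective: cross-entropy on labeled data plus
    an unsupervised term on unlabeled data, weighted by lambda_u (assumed)."""
    opt.zero_grad()
    loss_sup = F.cross_entropy(model(x_lab), y_lab)
    loss_unsup = model.unsup_nll(model(x_unlab))
    loss = loss_sup + lambda_u * loss_unsup
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = Classifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x_lab, y_lab = torch.randn(16, 32), torch.randint(0, 4, (16,))
    x_unlab = torch.randn(64, 32)  # stand-in for an unlabeled corpus
    print(training_step(model, opt, x_lab, y_lab, x_unlab))
```

In this sketch, only the few Gaussian parameters (`mu`, `log_var`, and the projection weights) summarize the unlabeled corpus, which is the sense in which such a latent low-dimensional parametric representation could be exchanged in a Federated Learning setting instead of raw data; the actual construction used by the authors is defined in the paper itself.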
doi:10.1109/bigdata52589.2021.9671539