Using Rényi-divergence and Arimoto-Rényi Information to Quantify Membership Information Leakage

Farhad Farokhi
2021 55th Annual Conference on Information Sciences and Systems (CISS)
Membership inference attacks, i.e., adversarial attacks that infer whether a data record was used to train a machine learning model, have recently been shown to pose a legitimate privacy risk in the machine learning literature. In this paper, we propose two measures of information leakage for investigating membership inference attacks, backed by results on binary hypothesis testing from the information theory literature. The first measure of information leakage is defined using the Rényi α-divergence between the distributions of the output of a machine learning model for data records that are in and out of the training dataset. The second measure of information leakage is based on the Arimoto-Rényi α-information between the membership random variable (whether the data record is in or out of the training dataset) and the output of the machine learning model. These measures of leakage are shown to be related to each other. We compare the proposed measures of information leakage with α-leakage from the information-theoretic privacy literature to establish some useful properties. We establish an upper bound on the α-divergence information leakage as a function of the privacy budget for differentially private machine learning models.
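As an illustration of the first leakage measure, the Rényi α-divergence between two discrete distributions P and Q is D_α(P‖Q) = (1/(α−1)) log Σᵢ pᵢ^α qᵢ^(1−α), with the α→1 limit recovering the KL divergence. The sketch below is not the paper's implementation; it assumes discrete (finite-support) output distributions, e.g. empirical histograms of a model's outputs on member and non-member records, and the function name `renyi_divergence` is our own.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi α-divergence D_α(P||Q) for discrete distributions p, q (in nats).

    Assumes p and q are valid probability vectors over the same support
    and that q > 0 wherever p > 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if alpha == 1.0:
        # α → 1 limit: Kullback-Leibler divergence
        mask = p > 0
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
    return float(np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

# Identical in/out output distributions imply zero membership leakage:
p_member = [0.5, 0.5]
print(renyi_divergence(p_member, p_member, 2.0))  # → 0.0

# Distinct distributions give strictly positive divergence:
print(renyi_divergence([0.9, 0.1], [0.5, 0.5], 2.0) > 0)  # → True
```

Larger α makes the divergence more sensitive to the worst-case likelihood ratio, which is why the paper can relate this quantity to the differential-privacy budget of the training algorithm.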
doi:10.1109/ciss50987.2021.9400316