Integrally Private Model Selection For Decision Trees
Computers & Security
Keywords: Machine learning; Model space; Privacy-preserving machine learning

Abstract

Privacy attacks targeting machine learning models are evolving. One of the primary goals of such attacks is to infer information about the training data used to construct the models. "Integral privacy" is a privacy model for machine learning and statistical models that explains how an intruder's uncertainty can be utilized to provide a privacy guarantee against model comparison attacks. Through experimental results, we show how
the distribution of models can be used to achieve integral privacy. We observe two categories of machine learning models based on their frequency of occurrence in the model space, and we explain the privacy implications of selecting each category based on a new attack model and empirical results. We also provide recommendations for private model selection based on the accuracy and stability of the models, along with the diversity of the training data that can be used to generate them.

modifications applied to the training data. The privacy model further discusses desirable characteristics a machine learning model should have in order to avoid such disclosures. The basic idea is that an intruder should not be able to learn about the training data or the set of modifications by comparing machine learning models generated before and after a particular modification. In this paper, our primary focus is to provide recommendations for machine learning model selection so that the selected models are compliant with integral privacy. For model selection, predictive accuracy is used as the principal criterion. However, with the increasing use of sensitive data and the need for collaborative data analysis (i.e., multiparty computation), the degree of "privacy" a model provides for its underlying training data has become an important factor. With the evolution of attacks targeting the machine
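The idea of examining how often a model recurs in the model space can be made concrete with a small sketch. The following is an illustrative example, not the paper's algorithm: it retrains a toy one-level decision stump on many random subsamples of the training data and counts how often each resulting model (represented by a hashable signature) recurs. The learner, the subsampling scheme, and all function names here are assumptions for illustration only; the intuition is that a model generated by many distinct training sets gives an intruder more uncertainty about which data produced it.

```python
# Illustrative sketch of counting model recurrence across resampled
# training sets. All names and the toy stump learner are assumptions,
# not the paper's method.
import random
from collections import Counter

def train_stump(data):
    """Toy learner: fit a one-level decision stump on (x, label) pairs.
    Returns a hashable model signature: (threshold, left_label, right_label)."""
    best = None
    for t in sorted({x for x, _ in data}):
        left = [y for x, y in data if x <= t]
        right = [y for x, y in data if x > t]
        if not left or not right:
            continue
        l_lab = Counter(left).most_common(1)[0][0]
        r_lab = Counter(right).most_common(1)[0][0]
        errs = sum(y != l_lab for y in left) + sum(y != r_lab for y in right)
        if best is None or errs < best[0]:
            best = (errs, (t, l_lab, r_lab))
    return best[1]

def model_frequencies(data, n_samples=200, frac=0.7, seed=0):
    """Retrain on random subsamples of the data and count how often
    each model signature recurs in the resulting model space."""
    rng = random.Random(seed)
    k = max(2, int(frac * len(data)))
    counts = Counter()
    for _ in range(n_samples):
        sample = rng.sample(data, k)
        counts[train_stump(sample)] += 1
    return counts

# Linearly separable toy data: class 0 for x < 10, class 1 otherwise.
data = [(i, 0) for i in range(10)] + [(i, 1) for i in range(10, 20)]
freqs = model_frequencies(data)
# Recurrent models (count > 1) can be generated by many distinct
# training sets, so selecting one discloses less about any one set.
recurrent = [m for m, c in freqs.most_common() if c > 1]
```

In this toy setting the frequency table separates models into the two categories discussed above: recurrent models that appear across many subsamples, and rare models tied to few specific training sets.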