Membership Inference Attacks Against Object Detection Models [article]

Yeachan Park, Myungjoo Kang
2020, arXiv pre-print
Machine learning models can leak information about the dataset on which they were trained. In this paper, we present the first membership inference attack against black-box object detection models, which determines whether given data records were used in training. To attack the object detection model, we devise a novel method, called the canvas method, in which predicted bounding boxes are drawn on an empty image to form the attack model's input. Based on the experiments, we successfully reveal the membership status of privately sensitive data trained using one-stage and two-stage detection models. We then propose defense strategies and also conduct a transfer attack between models and datasets. Our results show that object detection models are also vulnerable to inference attacks, like other models.
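The abstract's canvas method renders the detector's predicted bounding boxes onto an empty image before feeding them to the attack model. The sketch below illustrates one plausible way to build such a canvas; the function name `make_canvas`, the fill-with-confidence rendering, and the fixed canvas size are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def make_canvas(boxes, scores, canvas_size=(300, 300)):
    """Render predicted bounding boxes onto an empty canvas.

    A minimal sketch of the canvas idea: each predicted box is drawn
    onto a blank image, here by filling the box region with its
    confidence score. The exact rendering (fill vs. outline,
    per-class channels, normalization) is an assumption.
    """
    h, w = canvas_size
    canvas = np.zeros((h, w), dtype=np.float32)  # empty image
    for (x1, y1, x2, y2), score in zip(boxes, scores):
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        # keep the strongest prediction where boxes overlap
        canvas[y1:y2, x1:x2] = np.maximum(canvas[y1:y2, x1:x2], score)
    return canvas

# Example: two detections from a black-box detector (hypothetical values)
boxes = [(30, 40, 120, 200), (150, 60, 280, 220)]
scores = [0.92, 0.47]
attack_input = make_canvas(boxes, scores)  # input to the membership attack model
```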
arXiv:2001.04011v2