Inversion Attacks against CNN Models Based on Timing Attack

Zhaohe Chen, Ming Tang, Jinghai Li, Leo Yu Zhang
2022 Security and Communication Networks  
Model confidentiality attacks on convolutional neural networks (CNNs) are becoming increasingly common. Model reverse attacks are currently an important class of confidentiality attacks, but they require strong attacker capabilities and achieve low success rates. We study the timing leakage of CNNs running on an SoC (system-on-chip) and propose a reversing method based on a side-channel attack. It uses an SDK profiling tool to collect the timing leakage of the different layers of various CNNs. Exploiting the linear relationship between timing leakage, computation, and memory-usage parameters, we mount a profiling attack to build a mapping library between time and the different layers. The candidate in the mapping library whose theoretical time differs least from the measured time of an unknown model is taken as the model's real parameters. From there, we can reverse further layers and even the entire model. In our experiments, the reverse success rate for common convolutional layers is above 78.5%, and the reverse success rates for different CNNs (such as AlexNet, ConvNet, and LeNet) are all above 67.67%. Moreover, the results show that the success rate of our method is on average 10% higher than that of traditional methods. In an adversarial-example attack built on the recovered models, the success rate reaches 97%.
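The matching step described in the abstract can be sketched as a nearest-neighbor lookup: assume the layer's execution time is linear in its computation (FLOPs) and memory usage, and pick the candidate layer configuration whose predicted time is closest to the measured time. The function names, candidate layers, and coefficients below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of the timing-based layer recovery described above.
# Assumption: execution time is modeled as linear in FLOPs and memory usage,
# with illustrative coefficients a, b, c (a real attack would fit these from
# profiler measurements of known layers).

def predicted_time(flops, mem_bytes, a=1e-9, b=2e-9, c=1e-4):
    # Linear timing model: compute cost + memory cost + fixed overhead.
    return a * flops + b * mem_bytes + c

def reverse_layer(measured_time, candidates):
    """candidates: list of (name, flops, mem_bytes) layer hypotheses.

    Returns the candidate whose predicted time differs least from the
    measured time -- the 'smallest difference' rule from the abstract.
    """
    return min(
        candidates,
        key=lambda cand: abs(measured_time - predicted_time(cand[1], cand[2])),
    )

# Usage: simulate a noisy measurement of a 5x5 convolution and recover it.
candidates = [
    ("conv3x3_64", 1.8e8, 3.2e6),
    ("conv5x5_64", 5.0e8, 3.2e6),
    ("conv3x3_128", 3.6e8, 6.4e6),
]
t = predicted_time(5.0e8, 3.2e6) + 1e-5  # measured time with small noise
print(reverse_layer(t, candidates)[0])   # -> conv5x5_64
```

Repeating this lookup layer by layer, using each recovered layer to constrain the candidates for the next, corresponds to reversing the entire model as the abstract describes.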
doi:10.1155/2022/6285909