A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; the original URL remains accessible. The file type is application/pdf.
Hardening DNNs against Transfer Attacks during Network Compression using Greedy Adversarial Pruning
[article]
2022
arXiv
pre-print
The prevalence and success of Deep Neural Network (DNN) applications in recent years have motivated research on DNN compression, such as pruning and quantization. These techniques accelerate model inference, cut power consumption, and shrink the size and complexity of the hardware needed to run DNNs, all with little to no loss in accuracy. However, since DNNs are vulnerable to adversarial inputs, it is important to consider the relationship between compression and adversarial robustness.
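To illustrate one of the compression techniques the abstract mentions, the sketch below shows plain magnitude-based weight pruning in NumPy: the smallest-magnitude weights are zeroed to reach a target sparsity. The function name and threshold scheme are illustrative assumptions, not the paper's Greedy Adversarial Pruning method.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight array.

    `sparsity` is the fraction of weights to remove, in [0.0, 1.0].
    This is generic magnitude pruning, shown only as background for
    the compression setting the paper studies.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to zero out
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Prune half of a toy 2x2 weight matrix: the two smallest
# magnitudes (0.003 and -0.01) are removed.
w = np.array([[0.5, -0.01], [0.003, -2.0]])
pruned = magnitude_prune(w, 0.5)
```

Adversarially aware pruning schemes differ in *which* weights they remove, but the resulting sparse mask is applied in the same way.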
arXiv:2206.07406v1
fatcat:rj4nfmxlajhurarq3n2dizc4e4