A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit the original URL.
The file type is application/pdf.
On the Robustness of Semantic Segmentation Models to Adversarial Attacks
2018
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Deep Neural Networks (DNNs) perform exceptionally well on most recognition tasks, such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention, but it has not been extensively studied on multiple, large-scale datasets or on complex tasks such as semantic segmentation, which often require more specialised networks with additional components such as CRFs,
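Adversarial examples of the kind studied in robustness evaluations like this one are typically generated with gradient-based attacks such as the Fast Gradient Sign Method (FGSM). The following is a minimal sketch of FGSM against a hypothetical tiny logistic classifier (a stand-in for a real DNN, not the paper's segmentation models): the input is nudged by `eps` in the sign of the loss gradient, which increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Perturb input x by eps in the direction that increases the loss.

    Sketch of FGSM for a logistic classifier p = sigmoid(w.x + b);
    w, b, eps are illustrative, not taken from the paper.
    """
    p = sigmoid(w @ x + b)          # predicted probability of class 1
    grad_x = (p - y) * w            # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Hypothetical classifier weights and a clean input with true label y = 1.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])
y = 1.0

loss = lambda x_: -np.log(sigmoid(w @ x_ + b))
x_adv = fgsm_attack(x, y, w, b, eps=0.5)
print(loss(x), loss(x_adv))         # the attack increases the loss
```

For a segmentation model the same idea applies per pixel: the loss is summed over the spatial output map before taking the gradient with respect to the input image.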
doi:10.1109/cvpr.2018.00099
dblp:conf/cvpr/ArnabMT18
fatcat:jflpkjnihbd5toazufxb36cbpq