Image classification models can be easily fooled by adding small, imperceptible artificial perturbations to input images. ... Recent work shows that a large number of attack iterations is required to create adversarial examples effective enough to fool segmentation models. ...

Acknowledgement: This work is supported by the UKRI grant Turing AI Fellowship EP/W002981/1, the EPSRC/MURI grant EP/N019474/1, the HKU Startup Fund, and the HKU Seed Fund for Basic Research. ...

arXiv:2207.12391v1
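The iterative attack mentioned above can be illustrated with a minimal PGD-style (projected gradient descent) sketch on a toy logistic classifier. This is an illustrative assumption only, not the paper's segmentation setup: the model, parameter names, and hyperparameters here are all hypothetical.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, iters=20):
    """Toy PGD attack on a logistic classifier p(y=1|x) = sigmoid(w.x + b).

    Repeatedly take small signed-gradient steps that increase the loss,
    then project the perturbed input back into the L-infinity ball of
    radius eps around the clean input x. Many such iterations are what
    makes iterative attacks effective (and expensive).
    """
    x_adv = x.copy()
    for _ in range(iters):
        z = np.dot(w, x_adv) + b
        p = 1.0 / (1.0 + np.exp(-z))               # sigmoid prediction
        grad = (p - y) * w                          # d(BCE loss)/d(x_adv)
        x_adv = x_adv + alpha * np.sign(grad)       # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # stay within eps-ball
    return x_adv

# Clean point classified as class 1 (w.x + b > 0); the attack flips it
# while keeping the perturbation imperceptibly bounded by eps.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])
x_adv = pgd_attack(x, y=1.0, w=w, b=b, eps=1.5, alpha=0.3, iters=20)
```

The projection step (`np.clip`) is what distinguishes PGD from plain gradient ascent: the adversarial example never leaves the epsilon-neighbourhood of the clean input, which is what keeps the perturbation small.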