SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness [article]

Jindong Gu, Hengshuang Zhao, Volker Tresp, Philip Torr
2022 arXiv pre-print
Image classification models can be easily fooled by adding small, imperceptible artificial perturbations to input images.  ...  Recent work shows that a large number of attack iterations are required to create effective adversarial examples that fool segmentation models.  ...  Acknowledgement: This work is supported by the UKRI grant: Turing AI Fellowship EP/W002981/1, EPSRC/MURI grant: EP/N019474/1, HKU Startup Fund, and HKU Seed Fund for Basic Research.  ...
arXiv:2207.12391v1 fatcat:vlbebtbitvexdanuamcxrkrnpy
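The abstract refers to iterative perturbation attacks of the PGD family. As a rough, hedged illustration of that idea (not the SegPGD method itself, whose per-pixel loss weighting is defined in the paper), the following is a minimal sketch of a standard PGD attack in PyTorch; `model`, `images`, and `labels` are assumed placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=10):
    """Minimal PGD sketch: perturb `images` within an L-inf ball of radius `eps`
    using signed-gradient ascent steps of size `alpha`."""
    images = images.clone().detach()
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = model(adv)                       # for segmentation: (N, C, H, W) logits
        loss = F.cross_entropy(logits, labels)    # plain CE; SegPGD instead reweights pixels
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()  # ascend the loss
        adv = images + torch.clamp(adv - images, -eps, eps)  # project back into the eps-ball
        adv = adv.clamp(0, 1)                     # keep pixels in a valid range
    return adv.detach()
```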