6,315 Hits in 5.2 sec

Fashion-Guided Adversarial Attack on Person Segmentation [article]

Marc Treu, Trung-Nghia Le, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
2021 arXiv   pre-print
We propose a novel Fashion-Guided Adversarial Attack (FashionAdv) framework to automatically identify attackable regions in the target image to minimize the effect on image quality.  ...  It generates adversarial textures learned from fashion style images and then overlays them on the clothing regions in the original image to make all persons in the image invisible to person segmentation  ...  It also includes further exploration of adversarial attacks on general instance segmentation. Figure 1. Overview of our proposed Fashion-Guided Adversarial Attack (FashionAdv).  ... 
arXiv:2104.08422v2 fatcat:sclcnqor3ngvjb7biutojjpxam

SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing [article]

Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Bo Li
2020 arXiv   pre-print
targeted attack success rate against real-world black-box services such as the Azure face verification service, based on transferability.  ...  Such adversarial examples with controlled semantic manipulation can shed light on the vulnerabilities of DNNs as well as on potential defensive approaches.  ...  One of the key advantages of our SemanticAdv is that we can generate adversarial perturbations in a more controllable fashion, guided by the selected semantic attribute.  ... 
arXiv:1906.07927v4 fatcat:tyduj5qtsjhcbiyeqhybb3xhmm

Robust Adversarial Perturbation on Deep Proposal-based Models [article]

Yuezun Li, Daniel Tian, Ming-Ching Chang, Xiao Bian, Siwei Lyu
2019 arXiv   pre-print
Evaluations are performed on the MS COCO 2014 dataset for adversarial attacks on 6 state-of-the-art object detectors and 2 instance segmentation algorithms.  ...  Our method focuses on attacking the common component in these algorithms, namely the Region Proposal Network (RPN), to universally degrade their performance in a black-box fashion.  ...  Due to the degradation of the RPN after the R-AP attack, the person in FR-rn50 (b) is not detected.  ... 
arXiv:1809.05962v2 fatcat:njrzt7sixjculij267wmehkhie

On Saliency Maps and Adversarial Robustness [article]

Puneet Mangla, Vedant Singh, Vineeth N Balasubramanian
2020 arXiv   pre-print
A very recent trend has emerged to couple the notion of interpretability with adversarial robustness, unlike earlier efforts which focused solely on good interpretations or on robustness against adversaries  ...  In particular, we show that using annotations such as bounding boxes and segmentation masks, already provided with a dataset, as weak saliency maps suffices to improve adversarial robustness with no additional  ...  Improved robustness to stronger PGD attacks on CIFAR-100. GBP: Guided Backpropagation; G.CAM+: Grad-CAM++, used in SAT.  ... 
arXiv:2006.07828v2 fatcat:wbglaoimsfew7fs6sf4akh6pk4
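The PGD attack named in the snippet above is iterated signed-gradient ascent with projection back into an L-infinity ball. A minimal numpy sketch on a logistic-regression stand-in for a deep network (the surrogate model, `eps`, `alpha`, and the step count are all illustrative choices, not taken from the paper):

```python
import numpy as np

# Toy differentiable "model": logistic regression as a stand-in for a deep net.
rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1
x, y = rng.normal(size=4), 1.0

def loss(x):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))              # sigmoid output
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy

def loss_grad(x):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - y) * w                                  # dL/dx for this model

def pgd(x, eps=0.1, alpha=0.05, steps=5):
    """Projected gradient descent attack: repeated signed gradient ascent
    steps, each followed by projection into the L_inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)        # projection step
    return x_adv

x_adv = pgd(x)
```

The projection via `np.clip` is what distinguishes PGD from plain gradient ascent: the adversarial point can never drift more than `eps` from the original input in any coordinate.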

Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models [article]

Adith Boloor, Xin He, Christopher Gill, Yevgeniy Vorobeychik, Xuan Zhang
2019 arXiv   pre-print
We demonstrate the first end-to-end attacks on autonomous driving in simulation, using simple physically realizable attacks: the painting of black lines on the road.  ...  Moreover, attacks typically involve carefully constructed adversarial examples at the level of pixels.  ...  Visualization of the camera and the third person views from one attack episode are also shown. Fig. 3. Comparison of the infractions caused by different patterns.  ... 
arXiv:1903.05157v1 fatcat:d4mxmz4o7rfytpglp2wtiwkgji

Garment Design with Generative Adversarial Networks [article]

Chenxi Yuan, Mohsen Moghaddam
2020 arXiv   pre-print
large fashion dataset.  ...  This paper explores the capabilities of generative adversarial networks (GAN) for automated attribute-level editing of design concepts.  ...  Different from conventional adversarial attacks [38] [39] [40] , attribute editing involves making translations/adjustments to images based on the target attributes to generate a new sample with desired  ... 
arXiv:2007.10947v2 fatcat:vf2fbg4flvgvxdr4kvg6qh6k44

A Review on Visual Privacy Preservation Techniques for Active and Assisted Living [article]

Siddharth Ravi, Pau Climent-Pérez, Francisco Florez-Revuelta
2021 arXiv   pre-print
Acknowledgements This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (  ...  The authors would also like to acknowledge the contribution of COST Action CA19121 -GoodBrother, Network on Privacy-Aware Audio-and Video-Based Applications for Active and Assisted Living (  ...  Adaptive blurring [Zhang et al., 2021] is an algorithm that relies on semantic segmentation masks to guide the process of blurring on videos. The model relies on two steps.  ... 
arXiv:2112.09422v1 fatcat:rf2zx3vrq5esnn2dujo6h3scri
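The segmentation-guided adaptive blurring summarized above can be illustrated with a toy numpy sketch: blur only the pixels a mask selects and leave the rest untouched. The box blur, kernel size, and hand-made mask below are illustrative stand-ins; the surveyed method derives its mask from a semantic segmentation model.

```python
import numpy as np

def masked_blur(img, mask, k=5):
    """Apply a k-by-k box blur only where mask == 1 (a toy stand-in for
    segmentation-guided adaptive blurring of privacy-sensitive regions)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    blurred = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Mean over the k x k window centered at (i, j).
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    return np.where(mask == 1, blurred, img)   # blur inside mask, keep outside

# Toy grayscale image and a hand-made "segmentation" mask.
rng = np.random.default_rng(1)
img = rng.random((16, 16))
mask = np.zeros((16, 16), dtype=int)
mask[4:12, 4:12] = 1
out = masked_blur(img, mask)
```

Pixels outside the mask are returned bit-for-bit unchanged, which is the point of mask guidance: privacy processing touches only the regions the segmentation flags.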

Attacking Vision-based Perception in End-to-End Autonomous Driving Models [article]

Adith Boloor, Karthik Garimella, Xin He, Christopher Gill, Yevgeniy Vorobeychik, Xuan Zhang
2019 arXiv   pre-print
We present novel end-to-end attacks on autonomous driving in simulation, using simple physically realizable attacks: the painting of black lines on the road.  ...  However, deep learning-based perception has been shown to be vulnerable to a host of subtle adversarial manipulations of images.  ...  Ayan Chakrabarti for his advice on matters related to computer vision with this research and Dr. Roman Garnett for his suggestions regarding Bayesian Optimization.  ... 
arXiv:1910.01907v1 fatcat:nqqwhxznzjh4bi5p56kccx7o7e

Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification [article]

Qi Lei, Lingfei Wu, Pin-Yu Chen, Alexandros G. Dimakis, Inderjit S. Dhillon, Michael Witbrock
2019 arXiv   pre-print
This finding guarantees a 1-1/e approximation factor for attacks that use the greedy algorithm. Meanwhile, we show how to use the gradient of the attacked classifier to guide the greedy search.  ...  In this paper we formulate attacks with discrete input as the optimization of a set function.  ...  For instance, such attacks include but are not limited to attacks on malware detection, spam filtering, or even discrete attacks defined on continuous data, e.g., segmentation of an image.  ... 
arXiv:1812.00151v2 fatcat:jgba6m3ykzbmhfkt7glw7mtdle
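The 1-1/e guarantee cited in this record is the classic bound for greedy maximization of a monotone submodular function under a cardinality constraint. A small coverage example makes it concrete; the coverage objective and toy sets below are illustrative, whereas the paper's objective is built from the attacked classifier's loss.

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of the monotone submodular coverage function
    f(S) = |union of the chosen sets|, subject to |S| <= k. The classic
    result guarantees the greedy value is >= (1 - 1/e) of the optimum."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the set with the largest marginal gain over what is covered.
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
```

On this instance the greedy choice of sets 2 then 0 covers all seven elements, which happens to be optimal; in general the greedy solution is only guaranteed to reach the 1-1/e fraction of the best achievable coverage.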

Survey: Leakage and Privacy at Inference Time [article]

Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
2021 arXiv   pre-print
We provide a comprehensive survey of contemporary advances on several fronts, covering involuntary data leakage, which is natural to ML models, and potential malevolent leakage, which is caused by privacy attacks  ...  Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance, as commercial and government applications of ML can draw on multiple sources of data, potentially  ...  The privacy risks of sharing a medical image segmentation model publicly have been studied by [64] for linkage attacks, who showed that most state-of-the-art semantic segmentation models would be vulnerable  ... 
arXiv:2107.01614v1 fatcat:76a724yzkjfvjisrokssl6assa

Table of Contents

2019 2019 IEEE/CVF International Conference on Computer Vision (ICCV)  
NVIDIA), and Raquel Urtasun (Uber ATG) Asymmetric Cross-Guided Attention Network for Actor and Action Video Segmentation From Natural Global-Local Temporal Representations for Video Person Re-Identification  ...  via Learning Multi-Target Adversarial Network Once IMP: Instance Mask Projection for High Accuracy Semantic Segmentation of Things 5177 Cheng-Yang Fu (UNC-Chapel Hill), Tamara Berg (University of North  ... 
doi:10.1109/iccv.2019.00004 fatcat:5aouo4scprc75c7zetsimylj2y

Explainable AI: A Review of Machine Learning Interpretability Methods

Pantelis Linardatos, Vasilis Papastefanopoulos, Sotiris Kotsiantis
2020 Entropy  
This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations  ...  [115] on adversarial examples and the weaknesses of deep learning models against adversarial attacks.  ...  Studies on sensitivity analysis over recent years have focused on exposing the weaknesses of deep learning models and their vulnerability against adversarial attacks.  ... 
doi:10.3390/e23010018 pmid:33375658 pmcid:PMC7824368 fatcat:gv42gcovm5cxzl2kmdsluiegdi

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [article]

Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser (+29 others)
2021 arXiv   pre-print
We moreover hope that our contribution fuels discussions on desiderata for ML systems and strategies on how to propel existing approaches accordingly.  ...  The latter ones might gain insights into the specifics of modern ML methods.  ...  Some notable examples in the first category of attacks include attacks on semantic segmentation [HMCKBF17] or person detection [TVRG19] .  ... 
arXiv:2104.14235v1 fatcat:f6sj3v2brza7thyzw7b7fkpo2m

Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack [article]

Luca Pajola, Mauro Conti
2021 arXiv   pre-print
Among MLaaS, text-based applications are the most popular ones (e.g., language translators). Given this popularity, MLaaS must provide resiliency to adversarial manipulations.  ...  In the text domain, state-of-the-art attacks mainly focus on strategies that leverage ML models' weaknesses.  ...  we obtain one personality for each corpus.  ... 
arXiv:2104.05996v1 fatcat:r6kbzwqpo5f6to7hp3pahz6tqm

Adversarial Examples in Modern Machine Learning: A Review [article]

Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker
2019 arXiv   pre-print
We explore a variety of adversarial attack methods that apply to image-space content, real world adversarial attacks, adversarial defenses, and the transferability property of adversarial examples.  ...  We also discuss strengths and weaknesses of various methods of adversarial attack and defense.  ...  They showed how this method can be used to reliably detect FGSM adversaries on MNIST [116] and Fashion MNIST (F-MNIST) [119] datasets with fairly high AUC score.  ... 
arXiv:1911.05268v2 fatcat:majzak4sqbhcpeahghh6sm3dwq
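The FGSM adversary referenced in this record is a single signed-gradient step of size eps in the input space. A minimal numpy sketch on a logistic-regression stand-in (the surrogate model and the value of `eps` are hypothetical choices for illustration, not from the review):

```python
import numpy as np

# Toy differentiable "model": logistic regression as a stand-in for a network.
rng = np.random.default_rng(1)
w, b = rng.normal(size=8), -0.2
x, y = rng.normal(size=8), 1.0

def loss(x):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))              # sigmoid output
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))   # cross-entropy

def fgsm(x, eps):
    """Fast Gradient Sign Method: one L_inf step of size eps in the
    direction of the sign of the loss gradient with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w                                # dL/dx for this model
    return x + eps * np.sign(grad_x)

x_adv = fgsm(x, eps=0.1)
```

Because the perturbation is `eps * sign(grad)`, every coordinate moves by exactly eps, which is what makes FGSM adversaries comparatively easy to detect statistically, as the surveyed MNIST/F-MNIST results suggest.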
Showing results 1 — 15 out of 6,315 results