17,297 Hits in 2.3 sec

Discriminator-Free Generative Adversarial Attack [article]

Shaohao Lu, Yuqiao Xian, Ke Yan, Yi Hu, Xing Sun, Xiaowei Guo, Feiyue Huang, Wei-Shi Zheng
2021 pre-print
In this work, we find that the discriminator could be unnecessary for generative-based adversarial attack, and propose the Symmetric Saliency-based Auto-Encoder (SSAE) to generate the perturbations, which  ...  a GAN, the adversarial examples have either bad attack ability or bad visual quality.  ...  We propose a novel discriminator-free generative adversarial attack network called SSAE.  ... 
doi:10.1145/3474085.3475290 arXiv:2107.09225v1 fatcat:dvctiyhr2vdxrpcow2dgd2ge5u

Detecting and Recovering Adversarial Examples from Extracting Non-robust and Highly Predictive Adversarial Perturbations [article]

Mingyu Dong and Jiahao Chen and Diqun Yan and Jingxing Gao and Li Dong and Rangding Wang
2022 arXiv   pre-print
In our method, the perturbation extractor can extract the adversarial perturbation from AEs as a high-dimensional feature; then the trained AE discriminator determines whether the input is an AE.  ...  Thus, based on high-dimensional perturbation extraction, we propose a model-free AE detection method, the whole process of which is free from querying the victim model.  ...  We train the U-Net and the discriminator using only a single attack method generated from one victim model.  ... 
arXiv:2206.15128v1 fatcat:7qizl36tjnhvbm34yrruww3wqa

An Adversarially-Learned Turing Test for Dialog Generation Models [article]

Xiang Gao, Yizhe Zhang, Michel Galley, Bill Dolan
2021 arXiv   pre-print
To alleviate this risk, we propose an adversarial training approach to learn a robust model, ATT (Adversarial Turing Test), that discriminates machine-generated responses from human-written replies.  ...  The key benefit of this unrestricted adversarial training approach is allowing the discriminator to improve robustness in an iterative attack-defense game.  ...  That is, the generated adversarial examples tend to be insensitive to the context, indicating that the generator finds a universal adversarial attacking pattern that can successfully attack HvM for most  ... 
arXiv:2104.08231v1 fatcat:jihc6lhmdjdntpmjveto45qzz4

An Adversarial Network-based Multi-model Black-box Attack

Bin Lin, Jixin Chen, Zhihong Zhang, Yanlin Lai, Xinlong Wu, Lulu Tian, Wangchi Cheng
2021 Intelligent Automation and Soft Computing  
Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GAN).  ...  Experimental results on MNIST showed that our method can efficiently generate adversarial examples.  ...  As a result, the adversarial examples generated by this trained model are able to attack CM D with a high attack success rate (ASR).  ... 
doi:10.32604/iasc.2021.016818 fatcat:24dw6aiptbanpnu6wtlcj5flhm

Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [article]

Aidan Kehoe, Peter Wittek, Yanbo Xue, Alejandro Pozas-Kerstjens
2020 arXiv   pre-print
We provide a robust defence to adversarial attacks on discriminative algorithms.  ...  We use Boltzmann machines for discrimination purposes as attack-resistant classifiers, and compare them against standard state-of-the-art adversarial defences.  ...  However, since the adversarial examples are generated with a specific type of attack, the discriminator remains vulnerable to other types of attacks.  ... 
arXiv:2012.11619v1 fatcat:hcnmgznvsffdbnygdm6ydvkx3q

Push Stricter to Decide Better: A Class-Conditional Feature Adaptive Framework for Improving Adversarial Robustness [article]

Jia-Li Yin, Lehui Xie, Wanqing Zhu, Ximeng Liu, Bo-Hao Chen
2021 arXiv   pre-print
Specifically, we propose to incorporate a class-conditional discriminator to encourage the features to become (1) class-discriminative and (2) invariant to the change of adversarial attacks.  ...  The novel FAAT framework enables the trade-off between natural and robust accuracy by generating features with similar distribution across natural and adversarial data, and achieves higher overall robustness  ...  Inspired by this, we first propose to adopt such a discriminator to encourage invariant feature generation against the change of adversarial attacks.  ... 
arXiv:2112.00323v1 fatcat:qf4vdsli6jbadhruy2kan7nfau

Generative Adversarial Networks for Black-Box API Attacks with Limited Training Data [article]

Yi Shi, Yalin E. Sagduyu, Kemal Davaslioglu, Jason H. Li
2019 arXiv   pre-print
In return, a generative adversarial network (GAN) based on deep learning is built to generate synthetic training data from a limited number of real training data samples, thereby extending the training  ...  These stealth attacks with a small footprint (using a small number of API calls) make adversarial machine learning practical under the realistic case with limited training data available to the adversary  ...  The number of calls made by a user depends on the license and is limited in general, e.g., 1000 calls per day for the free license of DatumBox. The adversary launches a black-box exploratory attack.  ... 
arXiv:1901.09113v1 fatcat:xubnsti6obaitjzd7lzz2b2yvm

Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [article]

Bin Yan and Dong Wang and Huchuan Lu and Xiaoyun Yang
2020 arXiv   pre-print
Adversarial attack of CNN aims at deceiving models to misbehave by adding imperceptible perturbations to images.  ...  Although several works have focused on attacking image classifiers and object detectors, an effective and efficient method for attacking single object trackers of any target in a model-free way remains  ...  Most previous neural-network-based adversarial attack methods [34, 31] adopt GAN structure, using a discriminator to supervise the adversarial output of the generator to be similar to the original input  ... 
arXiv:2003.09595v1 fatcat:xy33cpybene75ks5nwc5eewho4

Cooling-Shrinking Attack: Blinding the Tracker With Imperceptible Noises

Bin Yan, Dong Wang, Huchuan Lu, Xiaoyun Yang
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Adversarial attack of CNN aims at deceiving models to misbehave by adding imperceptible perturbations to images.  ...  Although several works have focused on attacking image classifiers and object detectors, an effective and efficient method for attacking single object trackers of any target in a model-free way remains  ...  Most previous neural-network-based adversarial attack methods [34, 31] adopt GAN structure, using a discriminator to supervise the adversarial output of the generator to be similar to the original  ... 
doi:10.1109/cvpr42600.2020.00107 dblp:conf/cvpr/YanWLY20 fatcat:hez2mapqnrft3grnv5mhonbh3q

Distilling Discrimination and Generalization Knowledge for Event Detection via Delta-Representation Learning

Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun
2019 Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics  
Event detection systems rely on discrimination knowledge to distinguish ambiguous trigger words and generalization knowledge to detect unseen/sparse trigger words.  ...  Current neural event detection approaches focus on trigger-centric representations, which work well on distilling discrimination knowledge, but poorly on learning generalization knowledge.  ...  For example, to identify the unseen word hacked in S5 as an Attack trigger, an ED system needs to distill the generalized Attack pattern "[Trigger] to death" from S3.  ... 
doi:10.18653/v1/p19-1429 dblp:conf/acl/LuLHS19 fatcat:3l6r5jjsxzh2bplovz7wd5wduu

Adversarial Machine Learning in Text Analysis and Generation [article]

Izzat Alsmadi
2021 arXiv   pre-print
This paper focuses on studying aspects and research trends in adversarial machine learning specifically in text analysis and generation.  ...  A machine learner or model is secure if it can deliver main objectives with acceptable accuracy, efficiency, etc. while at the same time, it can resist different types and/or attempts of adversarial attacks  ...  Defense Against NLP Adversarial Attacks: Generating adversarial attacks on text has shown to be more challenging than for images and audio due to their discrete nature.  ... 
arXiv:2101.08675v1 fatcat:73b3v35oebefnhzuuuo52jpdtu

Targeted Speech Adversarial Example Generation with Generative Adversarial Network

Donghua Wang, Li Dong, Rangding Wang, Diqun Yan, Jie Wang
2020 IEEE Access  
Meanwhile, the discriminator constantly stimulates the generator to generate the limited perturbation, so that the adversarial examples it constructs can simultaneously fool the discriminator and attain  ...  For the discriminator, its goal is to distinguish the generated adversarial example (fake) from the genuine ones (true).  ... 
doi:10.1109/access.2020.3006130 fatcat:w7tksb7qujdifpilsd27776ory

Scale-free Photo-realistic Adversarial Pattern Attack [article]

Xiangbo Gao, Weicheng Xie, Minmin Liu, Cheng Luo, Qinliang Lin, Linlin Shen, Keerthy Kusumam, Siyang Song
2022 arXiv   pre-print
In this paper, we propose a scale-free generation-based attack algorithm that synthesizes semantically meaningful adversarial patterns globally to images with arbitrary scales.  ...  Although Generative Adversarial Networks (GAN) can partially address this problem by synthesizing a more semantically meaningful texture pattern, the main limitation is that existing generators can only  ...  Applying the PQ generator for image attack: Given a trained scale-free pattern generator PQG, the adversarial attack process is fairly straightforward.  ... 
arXiv:2208.06222v1 fatcat:ukmsnwqqy5fndjp4ftnxkv7ina

Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [article]

Jinyu Yang, Chunyuan Li, Weizhi An, Hehuan Ma, Yuzhi Guo, Yu Rong, Peilin Zhao, Junzhou Huang
2021 arXiv   pre-print
Extensive empirical studies on commonly used benchmarks demonstrate that ASSUDA is resistant to adversarial attacks.  ...  although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals that could learn discriminative  ...  Therefore, we can get supervision for free and encourage our method to learn discriminative representation for segmentation tasks.  ... 
arXiv:2105.10843v2 fatcat:hrr2myxiwnfczf4dh7zgjutvfe

Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification

Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, Wei Wang
2019 Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)  
To identify adversarial attacks, a perturbation discriminator validates how likely a token in the text is perturbed and provides a set of potential perturbations.  ...  In this paper, we propose a novel framework, learning to discriminate perturbations (DISP), to identify and adjust malicious perturbations, thereby blocking adversarial attacks for text classification  ...  adversarial networks to generate perturbation-free images.  ... 
doi:10.18653/v1/d19-1496 dblp:conf/emnlp/ZhouJCW19 fatcat:2viyvdnthnedrf7siafirasr2y