
Benchmarking adversarial attacks and defenses for time-series data [article]

Shoaib Ahmed Siddiqui, Andreas Dengel, Sheraz Ahmed
2020 arXiv   pre-print
This paves the way for future research in the direction of adversarial attacks and defenses, particularly for time-series data.  ...  In this paper, we perform detailed benchmarking of well-proven adversarial defense methodologies on time-series data. We restrict ourselves to the L_∞ threat model.  ...  In this paper, we employ some of the most well-recognized defense methodologies tested on images and evaluate their robustness for time-series data to establish a proper benchmark.  ... 
arXiv:2008.13261v1
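The L_∞ threat model this benchmark restricts itself to bounds every coordinate of the perturbation by a budget ε. A minimal sketch of an L_∞-bounded, FGSM-style attack on a toy logistic model, in illustrative Python (the model, variable names, and toy values are assumptions for illustration, not taken from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# L_inf-bounded (FGSM-style) step: every coordinate moves by +/- epsilon,
# so the perturbation satisfies ||x_adv - x||_inf <= epsilon by construction.
def fgsm_linf(x, grad, epsilon):
    return [xi + epsilon * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, grad)]

# Toy logistic model: loss = -log(sigmoid(y * <w, x>)); the gradient of the
# loss w.r.t. the input x is -y * sigmoid(-y * <w, x>) * w.
w = [0.5, -1.0, 2.0]
x = [1.0, 1.0, 1.0]
y = 1.0
margin = y * sum(wi * xi for wi, xi in zip(w, x))
grad_x = [-y * sigmoid(-margin) * wi for wi in w]

x_adv = fgsm_linf(x, grad_x, epsilon=0.1)
# x_adv lowers the classification margin (raises the loss) while staying
# inside the epsilon-ball of the L_inf norm.
```

The same sign-of-gradient step applies unchanged to a multivariate time series flattened into a vector, which is why image-domain L_∞ attacks transfer so directly to time-series benchmarking.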

Towards Robust Adversarial Training via Dual-label Supervised and Geometry Constraint

Liujuan Cao, Media Analytics and Computing Laboratory, Department of Artificial Intelligence, School of Informatics, Xiamen University, Xiamen 361005, China, Huafeng Kuang, Hong Liu, Yan Wang, Baochang Zhang, Feiyue Huang, Yongjian Wu, Rongrong Ji
2022 International Journal of Software and Informatics  
the geometric relationship between samples to learn a more robust model for better defense against adversarial attacks.  ...  Recent studies have shown that adversarial training is an effective method to defend against adversarial sample attacks.  ...  Adversarial attacks As a series of attack methods are proposed, many defense strategies have been developed to defend against adversarial attacks. For example, Papernot et al.  ... 
doi:10.21655/ijsi.1673-7288.00268

Disentangled Deep Autoencoding Regularization for Robust Image Classification [article]

Zhenyu Duan, Martin Renqiang Min, Li Erran Li, Mingbo Cai, Yi Xu, Bingbing Ni
2019 arXiv   pre-print
neural networks for image classification on robustness against adversarial attacks and generalization to novel test data.  ...  Our framework effectively learns disentangled appearance code and geometric code for robust image classification, which is the first disentangling based method defending against adversarial attacks and  ...  Our model does not require time-series data and is suitable for independently sampled image data.  ... 
arXiv:1902.11134v1

Adversarial Framework with Certified Robustness for Time-Series Domain via Statistical Features

Taha Belkhouja, Janardhan Rao Doppa
2022 The Journal of Artificial Intelligence Research  
Our experiments on diverse real-world benchmark datasets show the effectiveness of TSA-STAT in fooling DNNs for time-series domain and in improving their robustness.  ...  To address the unique challenges of time-series domain, TSA-STAT employs constraints on statistical features of the time-series data to construct adversarial examples.  ...  This research is supported in part by the AgAID AI Institute for Agriculture Decision Support, supported by the National Science Foundation and United States Department of Agriculture -National Institute  ... 
doi:10.1613/jair.1.13543

On Procedural Adversarial Noise Attack And Defense [article]

Jun Yan and Xiaoyang Deng and Huilin Yin and Wancheng Ge
2021 arXiv   pre-print
Procedural adversarial noise attack is a data-free universal perturbation generation method.  ...  Researchers have been devoted to promoting the research on the universal adversarial perturbations (UAPs) which are gradient-free and have little prior knowledge on data distributions.  ...  The authors would like to thank TUEV SUED for the kind and generous support.  ... 
arXiv:2108.04409v2

RoVISQ: Reduction of Video Service Quality via Adversarial Attacks on Deep Learning-based Video Compression [article]

Jung-Woo Chang, Mojan Javaheripi, Seira Hidano, Farinaz Koushanfar
2022 arXiv   pre-print
We empirically show the resilience of RoVISQ attacks against various defenses, i.e., adversarial training, video denoising, and JPEG compression.  ...  In this paper, we conduct the first systematic study for adversarial attacks on deep learning-based video compression and downstream classification systems.  ...  Defense for Video Classification. We now present the defense results against RoVISQ attacks for the video compression and classification system. Here, we benchmark our bandwidth attack.  ... 
arXiv:2203.10183v2

Improving the Generalization of Adversarial Training with Domain Adaptation [article]

Chuanbiao Song and Kun He and Liwei Wang and John E. Hopcroft
2019 arXiv   pre-print
By injecting adversarial examples into training data, adversarial training is promising for improving the robustness of deep learning models.  ...  To show the transfer ability of our method, we also extend ATDA to adversarial training on iterative attacks such as PGD-Adversarial Training (PAT), and the defense performance is improved considerably.  ...  ATDA) method to defend against adversarial attacks and expect the learned models to generalize well for various adversarial examples.  ... 
arXiv:1810.00740v7
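The adversarial-training recipe this abstract builds on, injecting adversarial examples into the training data and fitting on them, can be sketched on a toy 1-D logistic model. This is the generic FGSM adversarial-training loop, not the ATDA method itself; all names, data, and hyperparameters below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Generic FGSM adversarial training on a 1-D logistic model f(x) = w*x + b.
# Each update step first crafts an epsilon-bounded adversarial example,
# then takes the gradient step on that example instead of the clean point.
def train_adversarial(data, epochs=200, lr=0.5, epsilon=0.2):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:  # labels y in {-1, +1}
            # FGSM example: move x by epsilon in the loss-increasing
            # direction, sign of dL/dx = -y * sigmoid(-y * f(x)) * w.
            f = w * x + b
            gx = -y * sigmoid(-y * f) * w
            x_adv = x + epsilon * (1.0 if gx > 0 else -1.0)
            # Logistic-loss gradient step on the adversarial example.
            f_adv = w * x_adv + b
            s = -y * sigmoid(-y * f_adv)
            w -= lr * s * x_adv
            b -= lr * s
    return w, b

data = [(-1.0, -1), (-0.5, -1), (0.5, 1), (1.0, 1)]
w, b = train_adversarial(data)
# The trained model classifies the clean points correctly and stays correct
# under the same epsilon-bounded worst-case shift toward the boundary.
```

ATDA's contribution, per the abstract, is layering domain adaptation on top of this loop so the model generalizes across adversarial example distributions rather than overfitting to the attack seen during training.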

Morphence: Moving Target Defense Against Adversarial Examples [article]

Abderrahmen Amich, Birhanu Eshete
2021 arXiv   pre-print
In all cases, Morphence consistently outperforms the thus-far effective defense, adversarial training, even in the face of strong white-box attacks, while preserving accuracy on clean data.  ...  We evaluate Morphence on two benchmark image classification datasets (MNIST and CIFAR10) against five reference attacks (2 white-box and 3 black-box).  ...  ACKNOWLEDGEMENTS We are grateful to the anonymous reviewers for their insightful feedback that improved this paper.  ... 
arXiv:2108.13952v3

Harden Deep Convolutional Classifiers via K-Means Reconstruction

Fu Wang, Liu He, Wenfen Liu, Yanbin Zheng
2020 IEEE Access  
Comprehensive comparison and evaluation have been conducted to investigate our proposal, where the models protected by the proposed defense show substantial robustness to strong adversarial attacks.  ...  Our approach does not rely on any neural network architectures and can also work with existing pre-processing defenses to provide better protection for modern classifiers.  ...  As for DeepFool, it can be run for up to 100 iterations when searching for each adversarial example.  ... 
doi:10.1109/access.2020.3024197
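The core idea of a reconstruction-style pre-processing defense like the one above can be sketched in a few lines: quantize each input value to its nearest learned centroid, so small adversarial perturbations are rounded away before classification. The fixed centroids and toy values here are illustrative assumptions, not the paper's fitted k-means model:

```python
# Reconstruction defense sketch: map every input value to its nearest
# centroid. Perturbations smaller than half the centroid spacing are
# removed entirely, so clean and lightly-perturbed inputs reconstruct
# to the same quantized representation.
def reconstruct(values, centroids):
    return [min(centroids, key=lambda c: abs(c - v)) for v in values]

centroids = [0.0, 0.25, 0.5, 0.75, 1.0]  # illustrative; learned via k-means in practice
clean = [0.0, 0.5, 1.0]
perturbed = [0.05, 0.45, 0.98]  # clean input plus small adversarial noise

# Both inputs reconstruct to the identical quantized vector, so a
# classifier fed the reconstruction sees no difference.
```

Because the quantization step is architecture-agnostic, it composes with other pre-processing defenses, consistent with the abstract's claim of not relying on any particular network architecture.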

GUARD: Graph Universal Adversarial Defense [article]

Jintang Li, Jie Liao, Ruofan Wu, Liang Chen, Jiawang Dan, Changhua Meng, Zibin Zheng, Weiqiang Wang
2022 arXiv   pre-print
Extensive experiments on four benchmark datasets demonstrate that our method significantly improves robustness for several established GCNs against multiple adversarial attacks and outperforms state-of-the-art  ...  However, current approaches for defense are typically designed for the whole graph and consider the global performance, posing challenges in protecting important local nodes from stronger adversarial targeted  ...  Graph convolutional networks (GCNs) [14] , a series of neural network models primarily developed for graph structured data, have met with great success in a variety of applications and domains [10] .  ... 
arXiv:2204.09803v2

Adversarial samples for deep monocular 6D object pose estimation [article]

Jinlai Zhang, Weiming Li, Shuang Liang, Hao Wang, Jihong Zhu
2022 arXiv   pre-print
Extensive experiments were conducted to demonstrate the effectiveness, transferability, and anti-defense capability of our U6DA on large-scale public benchmarks.  ...  In this work, for the first time, we study adversarial samples that can fool deep learning models with imperceptible perturbations to the input image.  ...  Inspired by such concerns, to the best of our knowledge, this work for the first time studies adversarial samples for the monocular 6D object pose estimation task.  ... 
arXiv:2203.00302v2

Improving Deep Learning Model Robustness Against Adversarial Attack by Increasing the Network Capacity [article]

Marco Marchetti, Edmond S. L. Ho
2022 arXiv   pre-print
Experiments are conducted to identify the strengths and weaknesses of a new approach to improve the robustness of DL models against adversarial attacks.  ...  The results show improvements and new ideas that can be used as recommendations for researchers and practitioners to create increasingly better DL algorithms.  ...  Experimental results In the experiments, we evaluate the effectiveness of the DeepSec platform [15] , different adversarial attacks (CW2 and PGD) and defenses (NAT and PAT) on benchmark datasets MNIST  ... 
arXiv:2204.11357v1

Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks [article]

Xiaosen Wang, Yichen Yang, Yihe Deng, Kun He
2020 arXiv   pre-print
Thereby, we propose a fast text adversarial attack method called Fast Gradient Projection Method (FGPM) based on synonym substitution, which is about 20 times faster than existing text attack methods and  ...  Gradient-based attacks, which are very efficient for images, are hard to implement for synonym substitution based text attacks due to the lexical, grammatical and semantic constraints and the discrete  ...  We thank Kai-Wei Chang for helpful suggestions on our work.  ... 
arXiv:2008.03709v4
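The gradient-projection idea behind synonym-substitution attacks of this kind can be sketched simply: among a word's synonyms, pick the candidate whose embedding offset projects most strongly onto the loss gradient at that word's embedding. The toy embeddings, synonym table, and gradient below are made-up illustrations, not FGPM's actual vocabulary or scoring code:

```python
# Gradient-projection word substitution sketch: score each synonym by the
# dot product between its embedding offset and the loss gradient, and take
# the highest-scoring candidate as the substitution.
def best_substitution(word, grad, embeddings, synonyms):
    def score(cand):
        # Projection of the candidate's embedding offset onto the gradient.
        off = [e - f for e, f in zip(embeddings[cand], embeddings[word])]
        return sum(g * o for g, o in zip(grad, off))
    return max(synonyms[word], key=score)

embeddings = {
    "good":  [1.0, 0.2],
    "great": [1.2, 0.1],
    "fine":  [0.8, 0.4],
}
synonyms = {"good": ["great", "fine"]}
grad = [-1.0, 1.0]  # toy gradient of the loss w.r.t. the "good" embedding

# "fine" wins here: its offset (-0.2, +0.2) points along the gradient,
# while "great"'s offset (+0.2, -0.1) points against it.
choice = best_substitution("good", grad, embeddings, synonyms)
```

A single gradient evaluation scores every synonym of every word at once, which is the source of the speedup over attacks that query the model once per candidate substitution.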

Measuring the False Sense of Security [article]

Carlos Gomes
2022 arXiv   pre-print
These are computationally cheaper than strong attacks, enable comparisons between models, and do not require the large time investment of tailor-made attacks for specific models.  ...  Recently, several papers have demonstrated how widespread gradient masking is amongst proposed adversarial defenses.  ...  models  ...  they may prevent the time-consuming task of designing a tailor-made attack for every individual defense.  ... 
arXiv:2204.04778v1

Making machine learning robust against adversarial inputs

Ian Goodfellow, Patrick McDaniel, Nicolas Papernot
2018 Communications of the ACM  
countermeasures exist for the many attacks that have been demonstrated.  ...  To end the arms race between attackers and defenders, we suggest building more tools for verifying machine learning models; unlike  ...  algorithms, this implicitly rules out the possibility that an adversary could alter the distribution at either training time or test time.  ...  In the context of adversarial inputs at test time, few strong  ...  While such standardized testing of attacks and defenses does not substitute in any way for rigorous verification, it does provide a common benchmark.  ... 
doi:10.1145/3134599