3,517 Hits in 3.3 sec

Robust Reinforcement Learning using Adversarial Populations [article]

Eugene Vinitsky and Yuqing Du and Kanaad Parvate and Kathy Jang and Pieter Abbeel and Alexandre Bayen
2020 arXiv   pre-print
exploitable by new adversaries.  ...  The Robust RL formulation tackles this by adding worst-case adversarial noise to the dynamics and constructing the noise distribution as the solution to a zero-sum minimax game.  ...  Acknowledgments The authors would like to thank Lerrel Pinto for help understanding and reproducing "Robust Adversarial Reinforcement Learning" as well as insightful discussions of our problem.  ... 
arXiv:2008.01825v2 fatcat:xa6n2sf7cffwvlwsnaivz42quq
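
For context, the zero-sum minimax game this abstract refers to is conventionally written as below; this is the standard robust-RL formulation, not notation taken from the paper itself:

    \max_{\theta} \min_{\phi} \; \mathbb{E}_{\tau \sim (\pi_\theta,\, \bar{\pi}_\phi)} \Big[ \sum_{t=0}^{T} \gamma^{t}\, r(s_t, a_t) \Big]

Here \pi_\theta is the protagonist policy and \bar{\pi}_\phi is the adversary injecting worst-case noise into the dynamics; the population variant of the title replaces the single adversary \phi with one sampled from a set \{\phi_1, \dots, \phi_n\} at each rollout.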

Adversarial Bone Length Attack on Action Recognition [article]

Nariki Tanaka, Hiroshi Kera, Kazuhiko Kawamoto
2022 arXiv   pre-print
Existing attacks resolve this by exploiting the temporal structure of the skeleton motion so that the perturbation dimension increases to thousands.  ...  Compared to adversarial attacks on images, perturbations to skeletons are typically bounded to a lower dimension of approximately 100 per frame.  ...  With the NTU RGB+D dataset, the SGN model was slightly more vulnerable than the ST-GCN model, while with the HDM05 dataset, the ST-GCN model was extremely vulnerable even for small perturbations.  ...
arXiv:2109.05830v2 fatcat:xxl2gtjrnncqvep7cyrlnzcrna
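
As a rough illustration of the low-dimensional perturbation space described in this abstract (roughly one scalar per bone rather than one per pixel), a bone-length perturbation can be sketched as below; the data layout and function are assumptions for illustration, not the paper's code:

    import numpy as np

    def perturb_bone_lengths(joints, parents, eps):
        """Rescale each bone by (1 + eps[j]), propagating down the kinematic tree.
        joints: (J, 3) positions ordered so parents precede children;
        parents: length-J parent indices (-1 for the root);
        eps: length-J perturbations (an attacker would optimize these)."""
        out = joints.copy()
        for j, p in enumerate(parents):
            if p < 0:
                continue  # the root joint has no incoming bone
            bone = joints[j] - joints[p]
            # re-attach the rescaled bone to the already-perturbed parent
            out[j] = out[p] + (1.0 + eps[j]) * bone
        return out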

Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [article]

Chun Pong Lau, Jiang Liu, Hossein Souri, Wei-An Lin, Soheil Feizi, Rama Chellappa
2021 arXiv   pre-print
Our experiments show that Interpolated Joint Space Adversarial Training (IJSAT) achieves good performance in standard accuracy, robustness, and generalization in CIFAR-10/100, OM-ImageNet, and CIFAR-10  ...  Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.  ...  Interpolated Joint Space Adversarial Training Joint Space Attack can be used to harden a classifier against both seen and unseen attacks.  ... 
arXiv:2112.06323v1 fatcat:zclgklsqxrbo7fnuj6c662k4ka
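
Reading "joint space" as attacking in both the image space and a learned latent space, a minimal PGD-style sketch might look as below; the enc/dec autoencoder, step sizes, and budgets are placeholders, not the paper's construction:

    import torch
    import torch.nn.functional as F

    def joint_space_attack(model, enc, dec, x, y, eps_pix=8/255, eps_lat=0.1, steps=10):
        # one perturbation lives in pixel space, one in the latent space
        z = enc(x).detach()
        d_pix = torch.zeros_like(x, requires_grad=True)
        d_lat = torch.zeros_like(z, requires_grad=True)
        for _ in range(steps):
            x_adv = (dec(z + d_lat) + d_pix).clamp(0, 1)
            loss = F.cross_entropy(model(x_adv), y)
            g_pix, g_lat = torch.autograd.grad(loss, [d_pix, d_lat])
            d_pix = (d_pix + eps_pix / 4 * g_pix.sign()).clamp(-eps_pix, eps_pix).detach().requires_grad_()
            d_lat = (d_lat + eps_lat / 4 * g_lat.sign()).clamp(-eps_lat, eps_lat).detach().requires_grad_()
        return (dec(z + d_lat) + d_pix).clamp(0, 1).detach()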

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [article]

Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
2021 arXiv   pre-print
We then conduct a joint adversarial training on the pre-processing model to minimize this overall risk.  ...  This is due to the adversarial risk of the pre-processed model being neglected, which is another cause of the robustness degradation effect.  ...  To address this issue, we formulate an adversarial risk for the pre-processing model, to exploit the full adversarial examples to improve the inherent robustness of the pre-processing model instead of  ... 
arXiv:2106.05453v1 fatcat:kutbg4vcg5hxli6b4xi7p7itue
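
The "overall risk" mentioned above combines the pipeline's standard risk with its adversarial risk; one schematic training objective under that reading (the weighting lam and the attack routine are assumptions) is:

    import torch.nn.functional as F

    def joint_at_loss(preproc, classifier, attack, x, y, lam=1.0):
        """One loss evaluation for training the pre-processing model `preproc`;
        `attack` (e.g. PGD) crafts x_adv against the full pipeline."""
        x_adv = attack(lambda v: classifier(preproc(v)), x, y)
        loss_std = F.cross_entropy(classifier(preproc(x)), y)      # standard risk
        loss_adv = F.cross_entropy(classifier(preproc(x_adv)), y)  # adversarial risk
        return loss_std + lam * loss_adv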

On the Robustness of Human Pose Estimation [article]

Sahil Shah, Naman Jain, Abhishek Sharma, Arjun Jain
2021 arXiv   pre-print
Besides, targeted attacks are more difficult to obtain than un-targeted ones and some body-joints are easier to fool than the others.  ...  We find that compared to classification and semantic segmentation, human pose estimation architectures are relatively robust to adversarial attacks with the single-step attacks being surprisingly  ...  Due to the conditional joint prediction nature of the architecture that propagates the perturbation in one joint to the rest of the joints, Chained-Prediction turns out to be the least robust among the  ...
arXiv:1908.06401v2 fatcat:rf5q2gabuvgohhl47yguajqdju
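
The single-step attacks referred to here are FGSM-style; against a heatmap-based pose network the usual one-step construction looks as below (the MSE heatmap loss is an assumption about the architecture, not taken from the paper):

    import torch
    import torch.nn.functional as F

    def fgsm_pose(model, x, target_heatmaps, eps=8/255):
        # single gradient-ascent step on the heatmap regression loss
        x = x.clone().requires_grad_(True)
        loss = F.mse_loss(model(x), target_heatmaps)
        grad, = torch.autograd.grad(loss, x)
        return (x + eps * grad.sign()).clamp(0, 1).detach()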

Impact of Spatial Frequency Based Constraints on Adversarial Robustness [article]

Rémi Bernhard, Pierre-Alain Moellic, Martial Mermillod, Yannick Bourrier, Romain Cohendet, Miguel Solinas, Marina Reyboz
2021 arXiv   pre-print
In this paper, we investigate the robustness to adversarial perturbations of models enforced during training to leverage information corresponding to different spatial frequency ranges.  ...  Adversarial examples mainly exploit changes to input pixels to which humans are not sensitive, and arise from the fact that models make decisions based on uninterpretable features.  ...  InSecTT 2 and by the French National Research Agency (ANR) in the framework of the Investissements d'avenir program (ANR-10-AIRT-05, irtnanoelec) and benefited from the French Jean Zay supercomputer thanks to  ...
arXiv:2104.12679v2 fatcat:yy5bucbjm5f5hlqg4zg2sjxpii
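
One way to enforce a spatial-frequency range during training is to filter inputs in the Fourier domain before the forward pass; a minimal low-pass sketch (the radial cutoff is an assumption, not the paper's exact protocol):

    import torch

    def low_pass(x, cutoff):
        """Zero out spatial frequencies farther than `cutoff` from the
        spectrum center. x: (B, C, H, W) image batch."""
        H, W = x.shape[-2:]
        f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        yy = torch.arange(H, device=x.device).view(-1, 1) - H // 2
        xx = torch.arange(W, device=x.device).view(1, -1) - W // 2
        mask = (yy ** 2 + xx ** 2 <= cutoff ** 2).to(f.dtype)
        return torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real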

Robust Regularization with Adversarial Labelling of Perturbed Samples [article]

Xiaohui Guo, Richong Zhang, Yaowei Zheng, Yongyi Mao
2021 arXiv   pre-print
ability and adversarial robustness of the trained model.  ...  Recent research has suggested that the predictive accuracy of a neural network may be at odds with its adversarial robustness.  ...  We now develop a new regularization scheme that is grounded in VRM and exploits the joint space X × Y like MixUp.  ...
arXiv:2105.13745v1 fatcat:swzynzwj4zdz3g76k3k4mnbk7q
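
For reference, the MixUp-style use of the joint space X × Y mentioned in the snippet interpolates inputs and labels together, as in the plain-MixUp sketch below; the paper's scheme additionally assigns adversarial labels to perturbed samples, which this sketch does not show:

    import numpy as np
    import torch

    def mixup(x, y_onehot, alpha=1.0):
        # sample an interpolation coefficient and mix inputs and labels jointly
        lam = float(np.random.beta(alpha, alpha))
        perm = torch.randperm(x.size(0))
        x_mix = lam * x + (1 - lam) * x[perm]
        y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
        return x_mix, y_mix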

Adversarial training for multi-context joint entity and relation extraction [article]

Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
2019 arXiv   pre-print
Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data.  ...  We show how to use AT for the tasks of entity recognition and relation extraction.  ...  Acknowledgments We would like to thank the anonymous reviewers for the time and effort they spent in reviewing our work, and for their valuable feedback.  ... 
arXiv:1808.06876v3 fatcat:gv2rgen2hnfjhb44ywntvuqgvq

Likelihood Landscapes: A Unifying Principle Behind Many Adversarial Defenses [article]

Fu Lin, Rohit Mittapalli, Prithvijit Chattopadhyay, Daniel Bolya, Judy Hoffman
2020 arXiv   pre-print
We further explore directly regularizing towards a flat landscape for adversarial robustness.  ...  Convolutional Neural Networks have been shown to be vulnerable to adversarial examples, which are known to lie in subspaces close to where normal data lies but are not naturally occurring and of low  ...  that is robust to a perturbed input [24, 49, 43, 27, 16, 18].  ...
arXiv:2008.11300v1 fatcat:3tyiqsishva5np4sdoteo6gbie
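
"Regularizing towards a flat landscape" can be read as penalizing the input-gradient of the (log-)likelihood; the penalty below is an illustrative stand-in, not necessarily the regularizer the paper proposes:

    import torch
    import torch.nn.functional as F

    def flatness_regularized_loss(model, x, y, beta=1.0):
        """Cross-entropy plus a squared input-gradient penalty; assumes
        4-D image inputs (B, C, H, W)."""
        x = x.clone().requires_grad_(True)
        nll = F.cross_entropy(model(x), y)
        grad, = torch.autograd.grad(nll, x, create_graph=True)
        return nll + beta * grad.pow(2).sum(dim=(1, 2, 3)).mean()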

Adversarial training for multi-context joint entity and relation extraction

Giannis Bekoulis, Johannes Deleu, Thomas Demeester, Chris Develder
2018 Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing  
Adversarial training (AT) is a regularization method that can be used to improve the robustness of neural network methods by adding small perturbations in the training data.  ...  We show how to use AT for the tasks of entity recognition and relation extraction.  ...  Acknowledgments We would like to thank the anonymous reviewers for the time and effort they spent in reviewing our work, and for their valuable feedback.  ... 
doi:10.18653/v1/d18-1307 dblp:conf/emnlp/BekoulisDDD18 fatcat:pk2fgvnut5a35l6byt3zqgfedi
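
In sequence-labelling models the "small perturbations in the training data" are typically applied to word embeddings rather than to discrete tokens, following Miyato et al.; a sketch of that standard construction (variable names assumed):

    import torch

    def adversarial_embedding_perturbation(loss, emb, eps=1.0):
        """Worst-case L2-bounded perturbation of the embedding tensor `emb`,
        which must be part of the graph that produced `loss`."""
        grad, = torch.autograd.grad(loss, emb, retain_graph=True)
        return eps * grad / (grad.norm() + 1e-12)

The model is then trained on the sum of the loss at emb and the loss at emb plus this perturbation.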

Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack [article]

He Wang, Feixiang He, Zhexi Peng, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg
2021 arXiv   pre-print
In this paper, we examine the robustness of state-of-the-art action recognizers against adversarial attack, which has been rarely investigated so far.  ...  To this end, we propose a new method to attack action recognizers that rely on 3D skeletal motion. Our method involves an innovative perceptual loss that ensures the imperceptibility of the attack.  ...  This also suggests that classifiers could use perturbations on the dynamics to make the training more robust, which is complementary to the aforementioned suggestion of inducing noises around the perturbation  ...
arXiv:2103.05347v2 fatcat:tzj2jevedrgntpjyhs4nt6cqoa
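
The "perceptual loss that ensures imperceptibility" plausibly constrains joint dynamics; a common proxy for such a constraint is an acceleration-smoothness term, shown below as an illustrative stand-in rather than the paper's exact loss:

    import torch

    def smoothness_penalty(motion, motion_adv):
        """motion, motion_adv: (T, J, 3) joint positions over T frames.
        Penalize changes in second temporal differences (accelerations)."""
        acc = motion[2:] - 2 * motion[1:-1] + motion[:-2]
        acc_adv = motion_adv[2:] - 2 * motion_adv[1:-1] + motion_adv[:-2]
        return (acc_adv - acc).pow(2).mean()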

Metrics and methods for robustness evaluation of neural networks with generative models [article]

Igor Buzhinsky, Arseny Nerinovsky, Stavros Tripakis
2020 arXiv   pre-print
Many papers have proposed adversarial attacks, defenses and methods to measure robustness to such adversarial perturbations.  ...  In this paper, we propose several metrics to measure robustness of classifiers to natural adversarial examples, and methods to evaluate them.  ...  We thank Ari Heljakka for his help related to the use of the PIONEER generative autoencoder.  ... 
arXiv:2003.01993v2 fatcat:hzw325ahprblzkn64bckscx45m
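
A robustness metric over natural adversarial examples can be phrased as the smallest latent-space step that flips the classifier on generated data; the brute-force search below is only a sketch of that idea (the generator G, sampling scale, and trial count are assumptions):

    import torch

    def latent_robustness(classifier, G, z, y, sigma=0.05, trials=256):
        """Smallest ||dz|| among random latent perturbations that change the
        predicted class of G(z); z is a single latent vector, y its label."""
        best = float("inf")
        for _ in range(trials):
            dz = sigma * torch.randn_like(z)
            pred = classifier(G(z + dz)).argmax(dim=1)
            if pred.item() != int(y):
                best = min(best, dz.norm().item())
        return best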

Mixture of Robust Experts (MoRE): A Robust Denoising Method towards multiple perturbations [article]

Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn
2021 arXiv   pre-print
To generalize adversarial robustness over different perturbation types, the adversarial training method has been augmented with an improved inner maximization presenting a union of multiple perturbations  ...  To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem presenting the first-order  ...  Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or  ...
arXiv:2104.10586v4 fatcat:bwid53zrs5bkpfxqtklr6ntdly
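
The mixture-of-experts construction suggested by the title gates between classifiers hardened against different perturbation types; a minimal sketch (the gating network and experts are placeholders, not the paper's architecture):

    import torch
    import torch.nn as nn

    class MixtureOfRobustExperts(nn.Module):
        def __init__(self, experts, gate):
            super().__init__()
            # e.g. one expert adversarially trained per threat model (L_inf, L_2, ...)
            self.experts = nn.ModuleList(experts)
            self.gate = gate  # maps an input batch to (B, num_experts) scores

        def forward(self, x):
            w = torch.softmax(self.gate(x), dim=1)                     # (B, E)
            outs = torch.stack([e(x) for e in self.experts], dim=1)   # (B, E, C)
            return (w.unsqueeze(-1) * outs).sum(dim=1)                # (B, C)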

Adversarial Attacks on Neural Networks for Graph Data

Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
2019 Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence  
Despite their proliferation, currently there is no study of their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common.  ...  To cope with the underlying discrete domain we propose an efficient algorithm, Nettack, exploiting incremental computations.  ...  (1) How to design efficient algorithms that are able to find adversarial examples in a discrete domain? (2) Adversarial perturbations are aimed to be unnoticeable (by humans).  ...
doi:10.24963/ijcai.2019/872 dblp:conf/ijcai/ZugnerAG19 fatcat:2hnen4ucjzgmbchp6bgxd7lqzu
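
The greedy shape of such a discrete structure attack is sketched below; the surrogate scoring function is left abstract, and unlike Nettack's incremental computations this naive version simply rescans the candidates after every flip:

    def greedy_structure_attack(score, candidate_edges, budget):
        """score(edge) -> increase in surrogate loss if that edge is flipped.
        Flip the highest-scoring candidate, `budget` times."""
        flipped = []
        remaining = set(candidate_edges)
        for _ in range(budget):
            best = max(remaining, key=score)  # naive full rescan each round
            flipped.append(best)
            remaining.remove(best)
        return flipped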

Self-supervised Adversarial Training [article]

Kejiang Chen, Hang Zhou, Yuefeng Chen, Xiaofeng Mao, Yuhong Li, Yuan He, Hui Xue, Weiming Zhang, Nenghai Yu
2020 arXiv   pre-print
To escape from this predicament, many works try to harden the model in various ways, among which adversarial training is an effective way that learns robust feature representations so as to resist adversarial  ...  Meanwhile, self-supervised learning aims to learn robust and semantic embeddings from the data itself.  ...  Self-supervised Learning Self-supervised learning exploits internal structures of data and formulates predictive tasks to train a model, which can be seen as learning robust features.  ...
arXiv:1911.06470v2 fatcat:4hvazvht7rdnldvjjaoomqzwcy
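
Label-free adversarial example generation of the kind described here can be sketched as PGD that maximizes the feature-space distance to the clean input; the feature extractor f and step schedule below are assumptions, not the paper's exact procedure:

    import torch

    def self_supervised_attack(f, x, eps=8/255, steps=10):
        """PGD maximizing ||f(x_adv) - f(x)||^2 with no labels involved."""
        feat = f(x).detach()
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = (f((x + delta).clamp(0, 1)) - feat).pow(2).sum()
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + eps / 4 * grad.sign()).clamp(-eps, eps).detach().requires_grad_()
        return (x + delta).clamp(0, 1).detach()
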
Showing results 1 — 15 out of 3,517 results