312 Hits in 9.8 sec

A simple way to make neural networks robust against diverse image corruptions [article]

Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
2020 arXiv   pre-print
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.  ...  We build on top of these strong baseline results and show that adversarial training of the recognition model against uncorrelated worst-case noise distributions leads to an additional increase in performance  ...  Oliver Bringmann and Evgenia Rusak have been partially supported by the Deutsche Forschungsgemeinschaft (DFG) in the priority program 1835 "Cooperatively Interacting Automobiles" under grant BR2321/5-1  ... 
arXiv:2001.06057v5 fatcat:iuxisdehcnfrhdtedyp6ougc34
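The noise-based baseline this abstract builds on can be sketched as plain data augmentation: train on images corrupted by a noise distribution. A minimal illustration (the Gaussian distribution, the sigma value, and the [0, 1] clipping are illustrative assumptions, not the paper's exact recipe):

```python
import random

def gaussian_noise_augment(pixels, sigma=0.05, seed=0):
    """Return a noisy copy of a flat list of pixel values, with i.i.d.
    Gaussian noise added and each value clipped back to [0, 1]."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in pixels]

image = [0.5] * 16                # a tiny dummy 4x4 image, flattened
noisy = gaussian_noise_augment(image)
print(len(noisy), all(0.0 <= p <= 1.0 for p in noisy))
```

In practice the augmented batch would be fed to the classifier alongside (or instead of) the clean batch during training.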

Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges [article]

Bowei Xi
2021 arXiv   pre-print
This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks.  ...  For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks  ...  The defender played a mixed equilibrium strategy, which can be found by solving multiple single-leader-single-follower games with probabilities determined by the Bayesian Stackelberg game.  ... 
arXiv:2107.02894v1 fatcat:ir7vzxh3wfaddcmgezqtyxu7iy
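The "imperceptible perturbations" mentioned in the abstract above are commonly crafted with a fast-gradient-sign style step. A toy sketch, where an analytic quadratic loss stands in for a network loss (so the gradient is computed by hand rather than by backpropagation; all names and values here are illustrative):

```python
def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, grad, eps=0.03):
    """One FGSM-style step: move each pixel by eps in the direction of the
    loss gradient's sign, then clip back to the valid [0, 1] range."""
    return [min(1.0, max(0.0, xi + eps * sign(g))) for xi, g in zip(x, grad)]

x = [0.2, 0.5, 0.9]
target = [0.0, 1.0, 1.0]
grad = [2 * (xi - ti) for xi, ti in zip(x, target)]  # gradient of sum (x-t)^2
x_adv = fgsm_perturb(x, grad)
print(x_adv)   # each pixel shifted by at most eps = 0.03
```

The eps bound is what keeps the perturbation visually negligible while still moving the loss in the worst-case linear direction.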

Robust Vision-Based Cheat Detection in Competitive Gaming [article]

Aditya Jonnalagadda, Iuri Frosio, Seth Schneider, Morgan McGuire, Joohwan Kim
2021 arXiv   pre-print
We study the advantages and disadvantages of different DNN architectures operating on a local or global scale.  ...  Our results show that robust and effective anti-cheating through machine learning is practically feasible and can be used to guarantee fair play in online gaming.  ...  HaarPSI value is computed for the adversarial image against an uncorrupted original image. As expected, the image quality decays significantly with an increase in .  ... 
arXiv:2103.10031v2 fatcat:ohc6wmkewnb3xbw2rhwabgn264

Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks [article]

Tao Bai, Jinqi Luo, Jun Zhao
2020 arXiv   pre-print
Adversarial examples are inevitable on the road to pervasive applications of deep neural networks (DNNs).  ...  We then provide an overview on analyzing correlations among adversarial robustness and other critical indicators of DNN models.  ...  ., 2015; is recognized as the most effective way to gain adversarial robustness in practice, where the neural networks are forced to play a min-max game.  ... 
arXiv:2011.01539v1 fatcat:e3o47epftbc2rebpdx5yotzriy
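The min-max game named in the snippet above alternates an inner maximization (craft a worst-case perturbation) with an outer minimization (update the model on the perturbed input). A toy sketch on a linear model with squared loss, where both gradients are analytic; the step sizes and perturbation budget are illustrative assumptions:

```python
def adv_train_step(w, x, y, eps=0.1, lr=0.05):
    """One round of the min-max game for a linear model dot(w, x) with
    squared loss: the inner step perturbs x adversarially (linearized
    worst case), the outer step descends the loss on the perturbed input."""
    sign = lambda v: (v > 0) - (v < 0)
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    r = dot(w, x) - y
    x_adv = [xi + eps * sign(2 * r * wi) for xi, wi in zip(x, w)]   # maximize
    r_adv = dot(w, x_adv) - y
    return [wi - lr * 2 * r_adv * xi for wi, xi in zip(w, x_adv)]   # minimize

w, x, y = [0.5, -0.3], [1.0, 2.0], 1.0
for _ in range(50):
    w = adv_train_step(w, x, y)
residual = abs(w[0] * x[0] + w[1] * x[1] - y)
print(residual < 0.5)   # the clean-input loss has shrunk
```

Real adversarial training replaces the analytic inner step with several projected gradient steps through the network, but the alternation has the same shape.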

Adversarially robust segmentation models learn perceptually-aligned gradients [article]

Pedro Sandoval-Segura
2022 arXiv   pre-print
The effects of adversarial training on semantic segmentation networks have not been thoroughly explored.  ...  We seek to place additional weight behind the hypothesis that adversarially robust models exhibit gradients that are more perceptually-aligned with human vision.  ...  Jacobs, and Kfir Aberman for helpful discussions throughout the period of this work.  ... 
arXiv:2204.01099v1 fatcat:wn3fdshdbna4xho3xmnknkv65y

Adversarial Examples - A Complete Characterisation of the Phenomenon [article]

Alexandru Constantin Serban, Erik Poll, Joost Visser
2019 arXiv   pre-print
We aim to cover all the important concerns in this field of study: (1) the conjectures on the existence of adversarial examples, (2) the security, safety and robustness implications, (3) the methods used  ...  to generate and (4) protect against adversarial examples and (5) the ability of adversarial examples to transfer between different machine learning models.  ...  Fitness is determined by sending the image to a DNN.  ... 
arXiv:1810.01185v2 fatcat:ybtxdm7refakxfyec2wjonzehu

Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses [article]

Kashyap Coimbatore Murali
2020 arXiv   pre-print
In this paper, we explore the robustness of the Multi-Task Deep Neural Networks (MT-DNN) against non-targeted adversarial attacks across Natural Language Understanding (NLU) tasks as well as some possible  ...  the accuracy drops by 42.05% and 32.24% for the SNLI and SciTail tasks.  ...  After it was proven that image-classifying neural networks can be fooled into predicting incorrect classes by adding noise that is unidentifiable by humans [19] , the focus began to shift to how NLU  ... 
arXiv:2001.05286v1 fatcat:4ttls5pkcraatclu3ycofdr7ka

Deep Learning for Environmentally Robust Speech Recognition: An Overview of Recent Developments [article]

Zixing Zhang, Jürgen Geiger, Jouni Pohjalainen, Amr El-Desoky Mousa, Wenyu Jin, Björn Schuller
2018 arXiv   pre-print
those involved in the development of environmentally robust speech recognition systems.  ...  Eliminating the negative effect of non-stationary environmental noise is a long-standing research topic for automatic speech recognition that still remains an important challenge.  ...  ACKNOWLEDGEMENTS This work was supported by Huawei Technologies Co. Ltd.  ... 
arXiv:1705.10874v3 fatcat:evdhqnj7eraa5jiolakuf4mf3e

A Brief Survey on Deep Learning Based Data Hiding [article]

Chaoning Zhang, Chenguo Lin, Philipp Benz, Kejiang Chen, Weiming Zhang, In So Kweon
2022 arXiv   pre-print
Finally, further insight into deep hiding is provided by incorporating the perspective of adversarial attack.  ...  Data hiding is the art of concealing messages with limited perceptual changes. Recently, deep learning has enriched it from various perspectives with significant progress.  ...  The effect of adversarial training on the robustness against common corruptions has been investigated in [Luo et al., 2020] , which shows that it improves the robustness against noise-type perturbation  ... 
arXiv:2103.01607v2 fatcat:z4kyyy234vgltp3kpdhq5h5wsu

Adversarial Machine Learning And Speech Emotion Recognition: Utilizing Generative Adversarial Networks For Robustness [article]

Siddique Latif, Rajib Rana, Junaid Qadir
2018 arXiv   pre-print
However, recent research on adversarial examples poses enormous challenges to the robustness of SER systems by showing the susceptibility of deep neural networks to adversarial examples as they rely only  ...  Experimental evaluations suggest various interesting aspects of the effective utilization of adversarial examples useful for achieving robustness for SER systems, opening up opportunities for researchers  ...  We explore this phenomenon by mixing adversarial examples with training data to highlight the robustness of the model against attacks.  ... 
arXiv:1811.11402v2 fatcat:ykjjg43e2rb7lkbxidv72o7uqq

Where Classification Fails, Interpretation Rises [article]

Chanh Nguyen, Georgi Georgiev, Yujie Ji, Ting Wang
2017 arXiv   pre-print
In this work, we take a completely different route by leveraging the definition of adversarial inputs: while deceiving deep neural networks, they are barely discernible to human vision.  ...  We validate the efficacy of this framework through extensive experiments using benchmark datasets and attacks.  ...  Attention mask m determines the important components of x that influence the classification output of a classifier f by corrupting pixels of x with noise drawn from a predefined distribution and measures  ... 
arXiv:1712.00558v1 fatcat:7dfqzmjbfjfcja2smie3csd2ym
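The corruption-based importance scoring described in the snippet above (perturb a component with noise, measure how much the classifier output moves) can be sketched as follows; the toy linear "classifier", the noise scale, and the trial count are all illustrative assumptions:

```python
import random

def importance_by_corruption(f, x, sigma=0.5, trials=20, seed=0):
    """Score each component of x by how much corrupting it with Gaussian
    noise changes the model output f(x), averaged over several trials."""
    rng = random.Random(seed)
    base = f(x)
    scores = []
    for i in range(len(x)):
        total = 0.0
        for _ in range(trials):
            corrupted = list(x)
            corrupted[i] += rng.gauss(0.0, sigma)
            total += abs(f(corrupted) - base)
        scores.append(total / trials)
    return scores

f = lambda x: 3.0 * x[0] + 0.1 * x[1]      # toy linear "logit"
scores = importance_by_corruption(f, [0.5, 0.5])
print(scores[0] > scores[1])               # the heavily weighted input dominates
```

Components whose corruption barely moves the output receive low scores, which is the intuition behind attention masks of this kind.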

Policy Smoothing for Provably Robust Reinforcement Learning [article]

Aounon Kumar, Alexander Levine, Soheil Feizi
2022 arXiv   pre-print
The study of provable adversarial robustness for deep neural networks (DNNs) has mainly focused on static supervised learning tasks such as image classification.  ...  Prior works in provable robustness in RL seek to certify the behaviour of the victim policy at every time-step against a non-adaptive adversary using methods developed for the static setting.  ...  ACKNOWLEDGEMENTS This project was supported in part by NSF CAREER AWARD 1942230, a grant from NIST 60NANB20D134, HR001119S0026-GARD-FP-052, HR00112090132, ONR YIP award N00014-22-1-2271, Army Grant W911NF2120076  ... 
arXiv:2106.11420v3 fatcat:toalxmperncqbi4sswsrmkkpqu
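Policy smoothing as summarized above adapts randomized smoothing to sequential decisions: the certified policy acts on noise-perturbed observations. A heavily simplified one-step sketch with a hypothetical two-action threshold policy (the noise scale, sample count, and policy are illustrative, not the paper's construction):

```python
import random

def smoothed_action(policy, obs, sigma=0.2, samples=100, seed=0):
    """Randomized smoothing for a discrete policy: majority vote over the
    actions chosen on Gaussian-perturbed copies of the observation."""
    rng = random.Random(seed)
    votes = {0: 0, 1: 0}
    for _ in range(samples):
        noisy = [o + rng.gauss(0.0, sigma) for o in obs]
        votes[policy(noisy)] += 1
    return max(votes, key=votes.get)

policy = lambda o: int(sum(o) > 0.0)        # toy threshold policy, two actions
print(smoothed_action(policy, [0.4, 0.3]))  # the vote is stable at action 1
```

The margin of the vote is what yields a certificate: a small adversarial shift of the observation cannot flip a sufficiently lopsided majority.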

Adversarial Examples on Object Recognition: A Comprehensive Survey [article]

Alex Serban, Erik Poll, Joost Visser
2020 arXiv   pre-print
We start by introducing the hypotheses behind their existence, the methods used to construct or protect against them, and the capacity to transfer adversarial examples between different machine learning  ...  In this article we discuss the impact of adversarial examples on security, safety, and robustness of neural networks.  ...  Moreover, some practical experiments are not covered in detail, e.g. deploying adversarial examples in the physical world by printing corrupted images [46, 97] , altering the image acquisition device  ... 
arXiv:2008.04094v2 fatcat:7xycyybhpvhshawt7fy3fzeana

A survey of deep neural network architectures and their applications

Weibo Liu, Zidong Wang, Xiaohui Liu, Nianyin Zeng, Yurong Liu, Fuad E. Alsaadi
2017 Neurocomputing  
In 2012, the research group led by Hinton won the ImageNet image classification competition by using deep learning approaches [86] .  ...  In March 2016, a Go match was held in South Korea by Google's deep learning project DeepMind between their AI player AlphaGo and one of the world's strongest players, Lee Se-dol [140] .  ...  As mentioned in [17] , pooling is used to obtain invariance to image transformations. This process leads to better robustness against noise.  ... 
doi:10.1016/j.neucom.2016.12.038 fatcat:nkxvbhp47rfflpi5jev7hk4yq4
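The pooling invariance mentioned in the snippet above is easy to demonstrate: max pooling returns the same output when a feature shifts within its pooling window. A minimal 1-D sketch (the window size and inputs are illustrative):

```python
def max_pool_1d(x, size=2):
    """Non-overlapping 1-D max pooling over windows of `size`."""
    return [max(x[i:i + size]) for i in range(0, len(x), size)]

a = [0.0, 9.0, 0.0, 0.0, 7.0, 0.0]
b = [9.0, 0.0, 0.0, 0.0, 0.0, 7.0]    # same features, shifted within windows
print(max_pool_1d(a), max_pool_1d(b))  # identical pooled outputs
```

The same mechanism makes pooled features less sensitive to small spatial noise, which is the robustness benefit the survey refers to.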

Deep Representation Learning in Speech Processing: Challenges, Recent Advances, and Future Trends [article]

Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Junaid Qadir, Björn W. Schuller
2021 arXiv   pre-print
The main contribution of this paper is to present an up-to-date and comprehensive survey on different techniques of speech representation learning by bringing together the scattered research across three  ...  The significance of representation learning has increased with advances in deep learning (DL), where the representations are more useful and less dependent on human knowledge, making it very conducive  ...  They can learn high-level representations from speech that are robust to noise corruption.  ... 
arXiv:2001.00378v2 fatcat:ysvljxylwnajrbowd3kfc7l6ve
Showing results 1 — 15 out of 312 results