4,434 Hits in 5.6 sec

A simple way to make neural networks robust against diverse image corruptions [article]

Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
2020 arXiv   pre-print
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.  ...  Here, we demonstrate that a simple but properly tuned training with additive Gaussian and Speckle noise generalizes surprisingly well to unseen corruptions, easily reaching the previous state of the art  ...  Furthermore, augmentation methods have also been applied to make the models more robust against image corruptions.  ... 
arXiv:2001.06057v5 fatcat:iuxisdehcnfrhdtedyp6ougc34
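
As a concrete illustration of the recipe in the abstract above, the following is a minimal PyTorch sketch of training-time Gaussian and speckle noise augmentation; the sigma value and the coin-flip schedule are illustrative choices, not the paper's tuned settings.

```python
import random
import torch

def gaussian_noise(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Additive noise: x + n, n ~ N(0, sigma^2); clamp back to image range."""
    return (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)

def speckle_noise(x: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Multiplicative (speckle) noise: x + x * n, n ~ N(0, sigma^2)."""
    return (x + x * sigma * torch.randn_like(x)).clamp(0.0, 1.0)

def augment_batch(x: torch.Tensor) -> torch.Tensor:
    """Corrupt each training batch with one of the two noise types."""
    noise_fn = gaussian_noise if random.random() < 0.5 else speckle_noise
    return noise_fn(x)
```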

Combining Different V1 Brain Model Variants to Improve Robustness to Image Corruptions in CNNs [article]

Avinash Baidya, Joel Dapello, James J. DiCarlo, Tiago Marques
2021 arXiv   pre-print
While some convolutional neural networks (CNNs) have surpassed human visual abilities in object classification, they often struggle to recognize objects in images corrupted with different types of common  ...  Recently, it has been shown that simulating a primary visual cortex (V1) at the front of CNNs leads to small improvements in robustness to these image perturbations.  ...  “A simple way to make neural networks robust against diverse image corruptions”. In: (2020), pp. 1–34. URL: http://arxiv.org/abs/2001.06057.  ... 
arXiv:2110.10645v2 fatcat:7zi7yskvz5dttjviyulzdbxxne
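
The V1-front-end idea above can be caricatured as a fixed Gabor filter bank prepended to a trainable CNN. The sketch below assumes grayscale input and arbitrary filter-bank parameters; the actual VOneBlock is richer (simple/complex cells, neuronal stochasticity).

```python
import math
import torch
import torch.nn as nn

def gabor_kernel(size: int, theta: float, freq: float, sigma: float) -> torch.Tensor:
    """A single odd-phase Gabor filter of shape (size, size)."""
    half = size // 2
    ys, xs = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    xr = xs * math.cos(theta) + ys * math.sin(theta)
    yr = -xs * math.sin(theta) + ys * math.cos(theta)
    envelope = torch.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * torch.sin(2 * math.pi * freq * xr)

class GaborFrontEnd(nn.Module):
    """Fixed (non-trained) Gabor conv layer placed before a trainable CNN."""
    def __init__(self, out_channels: int = 32, size: int = 7):
        super().__init__()
        kernels = torch.stack([
            gabor_kernel(size, theta=i * math.pi / out_channels,
                         freq=0.25, sigma=2.0)
            for i in range(out_channels)
        ]).unsqueeze(1)                         # (out, 1, k, k)
        self.conv = nn.Conv2d(1, out_channels, size, padding=size // 2, bias=False)
        self.conv.weight.data.copy_(kernels)
        self.conv.weight.requires_grad_(False)  # hard-wired, like V1

    def forward(self, x):                       # x: grayscale (B, 1, H, W)
        return torch.relu(self.conv(x))
```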

PRIME: A few primitives can boost robustness to common corruptions [article]

Apostolos Modas, Rahul Rade, Guillermo Ortiz-Jiménez, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
2022 arXiv   pre-print
Despite their impressive performance on image classification tasks, deep networks have a hard time generalizing to unforeseen corruptions of their data.  ...  In this work, we take a step back and follow a principled approach to achieve robustness to common corruptions.  ...  This work has been partially supported by the CHIST-ERA program under Swiss NSF Grant 20CH21 180444, and partially by Google via a Postdoctoral Fellowship and a GCP Research Credit Award.  ... 
arXiv:2112.13547v2 fatcat:hommnaqv6fdj3odve76t7vhpsy

Learning Loss for Test-Time Augmentation [article]

Ildoo Kim, Younghoon Kim, Sungwoong Kim
2020 arXiv   pre-print
Experimental results on several image classification benchmarks show that the proposed instance-aware test-time augmentation improves the model's robustness against various corruptions.  ...  Data augmentation has been actively studied for robust neural networks. Most of the recent data augmentation methods focus on augmenting datasets during the training phase.  ...  Robustness in Convolutional Neural Networks: Convolutional neural networks are vulnerable to simple corruptions. This vulnerability has been studied in several works.  ... 
arXiv:2010.11422v1 fatcat:kcpa7l42jvamxiiycbvvwbiove
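
The paper's contribution is an instance-aware augmentation selector trained with a loss predictor; as a point of reference, plain test-time augmentation simply averages softmax outputs over a fixed transform set, as in this sketch (the transform list is an arbitrary example).

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def tta_predict(model, x, transforms):
    """Average class probabilities over augmented copies of x (B, C, H, W)."""
    model.eval()
    probs = torch.stack([F.softmax(model(t(x)), dim=1) for t in transforms])
    return probs.mean(dim=0)

# Example transform set: identity, horizontal flip, small horizontal shift.
transforms = [
    lambda x: x,
    lambda x: torch.flip(x, dims=[3]),
    lambda x: torch.roll(x, shifts=2, dims=3),
]
```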

Robustness via Cross-Domain Ensembles [article]

Teresa Yeo, Oğuzhan Fatih Kar, Alexander Sax, Amir Zamir
2021 arXiv   pre-print
We present a method for making neural network predictions robust to shifts from the training data distribution.  ...  The proposed method is based on making predictions via a diverse set of cues (called 'middle domains') and ensembling them into one strong prediction.  ...  This indicates that using middle domains promotes ensemble diversity in a way that makes it more challenging to create one attack that fools all paths simultaneously, hence this approach can be a promising  ... 
arXiv:2103.10919v2 fatcat:eegzeizkzrgxdpdjrw7qtnexeu
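
A toy rendering of the middle-domain ensembling described above, assuming one (hypothetical) transform-and-predict pair per path; the paper merges paths by uncertainty-weighted ensembling rather than the plain mean used here.

```python
import torch

@torch.no_grad()
def ensemble_predict(x, paths):
    """
    `paths` is a list of (to_middle_domain, predictor) pairs, e.g.
    (sobel_edges, edges_to_depth) as hypothetical names. Each path maps the
    input into a different intermediate representation before predicting,
    and the per-path predictions are merged into one output.
    """
    preds = [predictor(to_mid(x)) for to_mid, predictor in paths]
    return torch.stack(preds).mean(dim=0)
```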

Diverse Gaussian Noise Consistency Regularization for Robustness and Uncertainty Calibration [article]

Theodoros Tsiligkaridis, Athanasios Tsiligkaridis
2022 arXiv   pre-print
We show that this simple approach improves robustness against various unforeseen noise corruptions by 4.2-18.4% over adversarial training and other strong diverse data augmentation baselines across several  ...  In this paper, we propose a diverse Gaussian noise consistency regularization method for improving robustness of image classifiers under a variety of noise corruptions while still maintaining high clean  ...  However, robustness to noise corruptions remains a challenge.  ... 
arXiv:2104.01231v4 fatcat:jlgknfdfkfhyzf7uab5z2bhvri
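
A minimal sketch of a Gaussian noise consistency regularizer in the spirit of the abstract: the classifier is pushed to give similar predictive distributions on clean and noise-perturbed copies of each image. The sigma grid and loss weight are assumptions, not the paper's values.

```python
import torch
import torch.nn.functional as F

def noise_consistency_loss(model, x, y, sigmas=(0.05, 0.1, 0.2), weight=1.0):
    """Cross-entropy on clean inputs plus KL consistency to noisy views."""
    logits_clean = model(x)
    loss = F.cross_entropy(logits_clean, y)
    logp_clean = F.log_softmax(logits_clean, dim=1)
    for sigma in sigmas:
        logits_noisy = model(x + sigma * torch.randn_like(x))
        logp_noisy = F.log_softmax(logits_noisy, dim=1)
        # KL(clean || noisy) as the consistency term for this sigma
        loss = loss + weight * F.kl_div(logp_noisy, logp_clean,
                                        log_target=True, reduction="batchmean")
    return loss
```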

Out of Distribution Detection and Adversarial Attacks on Deep Neural Networks for Robust Medical Image Analysis [article]

Anisie Uwimana, Ransalu Senanayake
2021 arXiv   pre-print
For instance, the state-of-the-art Convolutional Neural Networks (CNNs) fail to detect adversarial samples or samples drawn statistically far away from the training distribution.  ...  Deep learning models have become a popular choice for medical image analysis.  ...  Liang et al. (2018) proposed ODIN (Out-of-DIstribution detector for Neural networks) which is a simple and effective method for detecting OOD images in neural networks.  ... 
arXiv:2107.04882v1 fatcat:4uf4gxmmtzf5rg7jrj2gfelnxa
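
Since the snippet names ODIN, here is a compact sketch of its two ingredients, temperature-scaled softmax and a small gradient-based input perturbation toward higher confidence; T and eps are ODIN's hyperparameters, with illustrative values.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eps=0.0014):
    """Higher score = more likely in-distribution (Liang et al., 2018)."""
    x = x.clone().requires_grad_(True)
    logits = model(x) / T
    # Perturb the input to increase the max (predicted-class) probability.
    loss = F.cross_entropy(logits, logits.argmax(dim=1))
    grad = torch.autograd.grad(loss, x)[0]
    x_pert = x.detach() - eps * grad.sign()
    with torch.no_grad():
        probs = F.softmax(model(x_pert) / T, dim=1)
    return probs.max(dim=1).values  # threshold this to flag OOD inputs
```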

Streaming Networks: Increase Noise Robustness and Filter Diversity via Hard-wired and Input-induced Sparsity [article]

Sergey Tarasenko, Fumihiko Takahashi
2020 arXiv   pre-print
We focus on the problem of robust recognition accuracy for noise-corrupted images. We introduce a novel network architecture called Streaming Networks.  ...  Finally, to demonstrate the increase in filter diversity, we show that the distribution of filter weights of the first conv layer gradually approaches a uniform distribution as the degree of hard-wired and  ...  The gist of sparsity control for network training is to make neurons activate with a certain frequency, thus providing a diversity of paths within a neural network.  ... 
arXiv:2004.03334v2 fatcat:rrn6wnghafbubdcygqet55ngai
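
The phrase "make neurons activate with a certain frequency" reads like a target-firing-rate penalty; below is a generic KL-based version of that idea (the target rate rho is hypothetical, and this is not the paper's hard-wired scheme).

```python
import torch

def sparsity_penalty(activations: torch.Tensor, rho: float = 0.05) -> torch.Tensor:
    """
    KL(rho || rho_hat) summed over units, where rho_hat is each unit's mean
    activation frequency over the batch; activations are assumed in [0, 1]
    (e.g. post-sigmoid or binarized ReLU outputs), shape (batch, units).
    """
    rho_hat = activations.mean(dim=0).clamp(1e-6, 1 - 1e-6)
    kl = rho * torch.log(rho / rho_hat) + \
         (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()
```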

AugMax: Adversarial Composition of Random Augmentations for Robust Training [article]

Haotao Wang, Chaowei Xiao, Jean Kossaifi, Zhiding Yu, Anima Anandkumar, Zhangyang Wang
2022 arXiv   pre-print
Data augmentation is a simple yet effective way to improve the robustness of deep neural networks (DNNs).  ...  Diversity and hardness are two complementary dimensions of data augmentation to achieve robustness.  ...  [12], pre-training [13-16], and robust network structures [17-19], to name a few.  ... 
arXiv:2110.13771v3 fatcat:e7ulbwviprhyzpoyt72m2tlzem
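
AugMax searches for the worst-case convex combination of a few randomly augmented views. This sketch performs that search with a few sign-gradient ascent steps on the mixing parameters; step count and step size are arbitrary, and the paper's dual batch-norm component is omitted.

```python
import torch
import torch.nn.functional as F

def augmax_input(model, x, y, views, steps=5, lr=0.1):
    """views: list of augmented copies of x, each of shape (B, C, H, W)."""
    stacked = torch.stack(views)                       # (k, B, C, H, W)
    w = torch.zeros(len(views), x.size(0), device=x.device, requires_grad=True)
    m = torch.zeros(x.size(0), device=x.device, requires_grad=True)
    for _ in range(steps):
        weights = torch.softmax(w, dim=0).view(len(views), -1, 1, 1, 1)
        lam = torch.sigmoid(m).view(-1, 1, 1, 1)       # clean/mixed balance
        x_adv = lam * x + (1 - lam) * (weights * stacked).sum(dim=0)
        loss = F.cross_entropy(model(x_adv), y)
        g_w, g_m = torch.autograd.grad(loss, [w, m])
        with torch.no_grad():                          # ascend: maximize loss
            w += lr * g_w.sign()
            m += lr * g_m.sign()
    weights = torch.softmax(w, dim=0).view(len(views), -1, 1, 1, 1)
    lam = torch.sigmoid(m).view(-1, 1, 1, 1)
    return (lam * x + (1 - lam) * (weights * stacked).sum(dim=0)).detach()
```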

Training Robust Deep Neural Networks via Adversarial Noise Propagation [article]

Aishan Liu, Xianglong Liu, Chongzhi Zhang, Hang Yu, Qiang Liu, Junfeng He
2019 arXiv   pre-print
Deep neural networks have been found vulnerable to noise such as adversarial examples and corruptions in practice.  ...  Motivated by the fact that hidden layers play a very important role in maintaining a robust model, this paper comes up with a simple yet powerful training algorithm named Adversarial Noise Propagation  ...  Goodfellow, Shlens, and Szegedy propose FGSM as a simple way to generate adversarial examples.  ... 
arXiv:1909.09034v1 fatcat:lgtngb3vjbhcvdj75qe5in4obm
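
For reference, the FGSM attack cited in the snippet is a one-step sign-gradient perturbation; ANP's contribution is to propagate such noise into hidden layers during training, which this standalone sketch does not show.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: x' = x + eps * sign(grad_x loss)."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x.detach() + eps * grad.sign()).clamp(0, 1)
```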

Pixel to Binary Embedding Towards Robustness for CNNs [article]

Ikki Kishida, Hideki Nakayama
2022 arXiv   pre-print
There are several problems with the robustness of Convolutional Neural Networks (CNNs).  ...  P2BE outperforms other binary embedding methods in robustness against adversarial perturbations and visual corruptions that are not seen during training.  ...  It implies that designing a sophisticated input space may be a promising way to improve robustness against previously unseen visual corruptions.  ... 
arXiv:2206.05898v1 fatcat:we6si3pagbhe5pfokjeg2l7sni
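
The snippet does not spell out P2BE's embedding, so as a stand-in here is classic thermometer encoding, a simple pixel-to-binary-vector scheme from earlier robustness work; the number of levels is an arbitrary choice.

```python
import torch

def thermometer_encode(x: torch.Tensor, levels: int = 16) -> torch.Tensor:
    """
    Map pixels in [0, 1] of shape (B, C, H, W) to binary vectors, giving
    output (B, C * levels, H, W) where bit k is 1 iff x > k / levels.
    """
    thresholds = torch.arange(levels, device=x.device) / levels   # (levels,)
    bits = (x.unsqueeze(2) > thresholds.view(1, 1, -1, 1, 1)).float()
    return bits.flatten(start_dim=1, end_dim=2)  # merge channel and level dims
```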

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty [article]

Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan
2020 arXiv   pre-print
In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers.  ...  We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.  ...  A popular way to make networks robust to ℓ_p adversarial examples is with adversarial training (Madry et al., 2018), which we use in this paper.  ... 
arXiv:1912.02781v2 fatcat:s5a7xpmjwjayphmrzqexkxzyni
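
AugMix itself mixes each image with a few randomly composed augmentation chains using Dirichlet and Beta weights, and trains with a Jensen-Shannon consistency loss across views. A condensed sketch, with the op pool left as a placeholder:

```python
import random
import numpy as np
import torch
import torch.nn.functional as F

def augmix(x, ops, width=3, depth=3, alpha=1.0):
    """x: one image tensor in [0, 1]; ops: list of img -> img augmentations."""
    ws = np.random.dirichlet([alpha] * width)   # chain mixing weights
    m = float(np.random.beta(alpha, alpha))     # clean/augmented balance
    mix = torch.zeros_like(x)
    for i in range(width):
        x_aug = x.clone()
        for _ in range(np.random.randint(1, depth + 1)):
            x_aug = random.choice(ops)(x_aug)   # random chain of ops
        mix = mix + float(ws[i]) * x_aug
    return m * x + (1 - m) * mix

def jensen_shannon(p1, p2, p3):
    """JS consistency across clean and two AugMix views (probabilities)."""
    m = ((p1 + p2 + p3) / 3).clamp(1e-7, 1).log()
    return (F.kl_div(m, p1, reduction="batchmean")
          + F.kl_div(m, p2, reduction="batchmean")
          + F.kl_div(m, p3, reduction="batchmean")) / 3
```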

VCNet: A Robust Approach to Blind Image Inpainting [article]

Yi Wang, Ying-Cong Chen, Xin Tao, Jiaya Jia
2020 arXiv   pre-print
In this paper, we relax the assumption by defining a new blind inpainting setting, making training a blind inpainting neural system robust against various unknown missing region patterns.  ...  Specifically, we propose a two-stage visual consistency network (VCN), meant to estimate where to fill (via masks) and generate what to fill.  ...  Setting N to a constant value or a certain kind of noise makes it, and hence M, easy for a deep neural net, or even a simple linear classifier, to distinguish from a natural image patch.  ... 
arXiv:2003.06816v1 fatcat:2755gwvvizfuhjtgkkkx4fijtu
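
A structural skeleton of the two-stage design described above, with toy conv stacks standing in for the paper's networks: stage one predicts where to fill (a soft mask), stage two generates what to fill from the masked image.

```python
import torch
import torch.nn as nn

class TwoStageBlindInpainter(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.mask_net = nn.Sequential(       # stage 1: where to fill
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.fill_net = nn.Sequential(       # stage 2: what to fill
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        mask = self.mask_net(x)                      # soft corruption mask
        masked = x * (1 - mask)                      # suppress suspect pixels
        filled = self.fill_net(torch.cat([masked, mask], dim=1))
        return mask, mask * filled + (1 - mask) * x  # composite output
```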

Noisy Learning for Neural ODEs Acts as a Robustness Locus Widening [article]

Martin Gonzalez, Hatem Hajri, Loic Cantat, Mihaly Petreczky
2022 arXiv   pre-print
We then use this criterion to evaluate a cheap data augmentation technique as a reliable way for demonstrating the natural robustness of neural ODEs against simulated image corruptions across multiple datasets  ...  We propose a novel and simple accuracy metric which can be used to evaluate intrinsic robustness and to validate dataset corruption simulators.  ...  Acknowledgements The authors thank each other for the fruitful conversations that led to the project, of which this paper is a preliminary work.  ... 
arXiv:2206.08237v1 fatcat:mgdnaltehfferjzgfazuxuluby

On 1/n neural representation and robustness [article]

Josue Nassar, Piotr Aleksander Sokol, SueYeon Chung, Kenneth D. Harris, Il Memming Park
2020 arXiv   pre-print
Our results show that imposing the experimentally observed structure on artificial neural networks makes them more robust to adversarial attacks.  ...  A pressing question in these areas is understanding how the structure of the representation used by neural networks affects both their generalization and their robustness to perturbations.  ...  A simple way to enforce a power-law decay without changing its architecture is to use the finite-dimensional embedding and directly regularize the eigenspectrum of the neural representation used at layer  ... 
arXiv:2012.04729v1 fatcat:64n46oqxdzg63ouzhkmiscdx2u
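
A rough sketch of the regularizer suggested by the last fragment: take the eigenspectrum of a hidden layer's batch covariance and penalize its deviation from a 1/n^alpha power law; the exact loss form in the paper may differ.

```python
import torch

def spectrum_regularizer(h: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """
    h: (batch, features) hidden activations. Penalizes the squared log-distance
    between the covariance eigenspectrum and the target lambda_n ~ n^(-alpha).
    """
    h = h - h.mean(dim=0, keepdim=True)
    cov = h.T @ h / (h.size(0) - 1)
    eigs = torch.linalg.eigvalsh(cov).flip(0).clamp_min(1e-8)  # descending
    n = torch.arange(1, eigs.numel() + 1, dtype=h.dtype, device=h.device)
    target = eigs[0] * n ** (-alpha)      # anchor the power law at lambda_1
    return ((eigs.log() - target.log()) ** 2).mean()
```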
Showing results 1 — 15 out of 4,434 results