5,077 Hits in 6.3 sec

Achieving Generalizable Robustness of Deep Neural Networks by Stability Training [chapter]

Jan Laermann, Wojciech Samek, Nils Strodthoff
2019 Lecture Notes in Computer Science  
We study the recently introduced stability training as a general-purpose method to increase the robustness of deep neural networks against input perturbations.  ...  robustness against a broader range of distortion strengths and types unseen during training, a considerably smaller hyperparameter dependence and fewer potentially negative side effects compared to data  ...  Stability Training: Stability training aims to stabilize predictions of a deep neural network in response to small input distortions.  ...
doi:10.1007/978-3-030-33676-9_25 fatcat:3h7sto7zqbb2bbg4rrietnxvn4
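The stability objective this abstract describes can be summarized in a short sketch. A minimal PyTorch version, assuming a classifier `model`, cross-entropy as the task loss, and Gaussian pixel noise as the input distortion; the weight `alpha` and noise scale are hypothetical hyperparameters, not values from the paper:

```python
import torch
import torch.nn.functional as F

def stability_training_loss(model, x, y, alpha=0.01, noise_std=0.04):
    """Task loss plus a penalty that keeps predictions stable
    under a small random distortion of the input."""
    logits_clean = model(x)
    task_loss = F.cross_entropy(logits_clean, y)
    # Distorted copy of the input (Gaussian noise here; the paper
    # considers a range of distortion types and strengths).
    x_noisy = x + noise_std * torch.randn_like(x)
    logits_noisy = model(x_noisy)
    # Stability term: KL divergence between clean and noisy predictions.
    stability = F.kl_div(
        F.log_softmax(logits_noisy, dim=1),
        F.softmax(logits_clean, dim=1).detach(),
        reduction="batchmean",
    )
    return task_loss + alpha * stability
```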

Deep Learning Models for Fast Ultrasound Localization Microscopy

Jihwan Youn, Ben Luijten, Matthias Bo Stuart, Yonina C. Eldar, Ruud J. G. van Sloun, Jørgen Arendt Jensen
2020 2020 IEEE International Ultrasonics Symposium (IUS)  
In this work, a data-driven encoder-decoder convolutional neural network (deep-ULM) and a model-based deep unfolded network embedding a sparsity prior (deep unfolded ULM) are analyzed in terms of localization  ...  Additionally, thanks to its model-based approach, deep unfolded ULM needed far fewer learnable parameters and was computationally more efficient, and consequently achieved better generalizability than  ...  Smoothing was applied to the true MB positions to provide larger gradients and ensure training stability. 1) Deep-ULM: Deep-ULM uses an encoder-decoder convolutional neural network (CNN), which is widely  ...
doi:10.1109/ius46767.2020.9251561 fatcat:pex22aht6bfmre53mw5a5kklky
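The target-smoothing step mentioned in the snippet is easy to illustrate. A sketch, assuming microbubble (MB) positions are given as pixel indices on a localization grid; the grid size and Gaussian width `sigma` are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smoothed_target(positions, shape, sigma=1.0):
    """Convert point MB positions into a Gaussian-blurred heatmap,
    giving the network larger gradients around each target."""
    heatmap = np.zeros(shape, dtype=np.float32)
    for row, col in positions:
        heatmap[row, col] = 1.0
    return gaussian_filter(heatmap, sigma=sigma)

target = smoothed_target([(10, 12), (40, 7)], shape=(64, 64))
```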

Learning robust and high-precision quantum controls

Re-Bing Wu, Haijin Ding, Daoyi Dong, Xiaoting Wang
2019 Physical Review A  
The search for robust quantum controls is then equivalent to training a highly generalizable NN, to which numerous tuning skills matured in machine learning can be transferred.  ...  In this paper, we show that this hard problem can be translated to a supervised machine learning task by treating the time-ordered quantum evolution as a layer-ordered neural network (NN).  ...  We find that the search for robust quantum controls can be elegantly mapped to the training of a deep neural network (DNN), and the enormously powerful techniques developed for the latter in deep learning  ...
doi:10.1103/physreva.99.042327 fatcat:jofkfzxwxrdatn4zim2fsjfff4
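The mapping in the snippet treats each piecewise-constant control interval as one "layer" of the time-ordered evolution. A minimal numerical sketch with NumPy/SciPy, assuming a single-qubit drift Hamiltonian and one control field; the operators, pulse amplitudes, and time step are illustrative, not the paper's system:

```python
import numpy as np
from scipy.linalg import expm

# Pauli operators for a single qubit (illustrative system).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def evolve(controls, dt=0.1):
    """Time-ordered evolution: each control amplitude u_k acts like
    one layer, so U = U_K ... U_2 U_1."""
    U = np.eye(2, dtype=complex)
    for u in controls:
        H = sz + u * sx                  # drift plus controlled term
        U = expm(-1j * H * dt) @ U       # one "layer" of the network
    return U

U = evolve(controls=[0.5, -0.3, 0.8, 0.1])
```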

Automated Sleep Stage Scoring of the Sleep Heart Health Study Using Deep Neural Networks

Linda Zhang, Daniel Fabbri, Raghu Upender, David Kent
2019 Sleep  
Deep learning is a form of machine learning that uses neural networks to recognize data patterns by inspecting many examples rather than by following explicit programming.  ...  Results: The optimal neural network model was composed of spectrograms in the input layer feeding into CNN layers and an LSTM layer to achieve a weighted F1-score of 0.87 and κ = 0.82.  ...  Noisy input data are hypothesized to improve the robustness of deep learning models by stabilizing against distortions in the input [54].  ...
doi:10.1093/sleep/zsz159 pmid:31289828 pmcid:PMC6802563 fatcat:xro7fohjufcspdrbdpammkpckm
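The optimal model described, spectrogram input into CNN layers followed by an LSTM, can be outlined in a few lines. A minimal PyTorch sketch, assuming per-epoch spectrograms of shape (1, freq, time) and five sleep-stage classes; all layer sizes are hypothetical, not the published architecture:

```python
import torch
import torch.nn as nn

class CNNLSTMScorer(nn.Module):
    """Spectrogram -> CNN features -> LSTM over time -> stage logits."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, 1, freq, time)
        f = self.cnn(x)                      # (batch, 32, freq', time')
        f = f.mean(dim=2).transpose(1, 2)    # pool freq -> (batch, time', 32)
        out, _ = self.lstm(f)                # LSTM over the time axis
        return self.head(out[:, -1])         # logits for one sleep epoch

logits = CNNLSTMScorer()(torch.randn(8, 1, 64, 128))
```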

Are Neural Ranking Models Robust? [article]

Chen Wu, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Xueqi Cheng
2022 arXiv   pre-print
This is the first comprehensive study on the robustness of neural ranking models.  ...  While neural ranking models are less robust than other IR models in most cases, some of them can still win 1 out of 5 tasks.  ...  They measured the distance based on deep features extracted from the deep networks. The target data found by their method was then iteratively added to the training data.  ...
arXiv:2108.05018v4 fatcat:ke6krrxupjadljka4sgjvcnbj4

Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [article]

Shiye Lei, Zhuozhuo Tu, Leszek Rutkowski, Feng Zhou, Li Shen, Fengxiang He, Dacheng Tao
2021 arXiv   pre-print
models: (1) first train a neural network normally from scratch to realize fast training; and (2) convert the first layer to Bayesian and infer it by employing stochastic variational inference, while  ...  Bayesian neural networks (BNNs) have become a principal approach to alleviate overconfident predictions in deep learning, but they often suffer from scaling issues due to a large number of distribution  ...  The experiment reveals a large disparity in stability among the different layers of deep neural networks.  ...
arXiv:2112.06281v1 fatcat:7wpry2bbrraqtcldvj5w2j4bim
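The two-stage recipe in the snippet can be sketched directly. A minimal PyTorch illustration, assuming the first linear layer of a normally trained network is replaced by a mean-field Gaussian layer sampled with the reparameterization trick; the stand-in network and all names are illustrative:

```python
import torch
import torch.nn as nn

class BayesianLinear(nn.Module):
    """Mean-field Gaussian weights, sampled via reparameterization."""
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.mu = nn.Parameter(pretrained.weight.detach().clone())
        self.log_sigma = nn.Parameter(torch.full_like(self.mu, -5.0))
        self.bias = nn.Parameter(pretrained.bias.detach().clone())

    def forward(self, x):
        eps = torch.randn_like(self.mu)
        w = self.mu + torch.exp(self.log_sigma) * eps   # sample weights
        return x @ w.t() + self.bias

# Stage 1: a normally trained network (toy stand-in here).
net = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
# Stage 2: convert only the first layer to Bayesian; freeze the rest.
net[0] = BayesianLinear(net[0])
for p in net[2].parameters():
    p.requires_grad = False
```

In a full treatment the variational objective would also include a KL term against the weight prior; only the sampling mechanics are shown here.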

How Does Noise Help Robustness? Explanation and Exploration under the Neural SDE Framework

Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
We provide some theoretical analyses explaining the improved robustness of our models against input perturbations.  ...  Furthermore, we demonstrate that the Neural SDE network can achieve better generalization than the Neural ODE and is more resistant to adversarial and non-adversarial input perturbations.  ...  Acknowledgement: This work is partially supported by NSF under IIS-1719097.  ...
doi:10.1109/cvpr42600.2020.00036 dblp:conf/cvpr/LiuXSCKH20 fatcat:2kpebczdbbfp7jvfy3gafotdpi
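The noise-injection idea behind a Neural SDE can be shown with a one-line Euler-Maruyama step. A sketch, assuming a small residual-style drift network `f`; the diffusion scale `sigma`, step count, and step size are hypothetical hyperparameters:

```python
import torch
import torch.nn as nn

class NeuralSDEBlock(nn.Module):
    """dx = f(x) dt + sigma dW, integrated with Euler-Maruyama.
    The injected Brownian noise acts as a built-in regularizer."""
    def __init__(self, dim, sigma=0.1, steps=4, dt=0.25):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                               nn.Linear(dim, dim))
        self.sigma, self.steps, self.dt = sigma, steps, dt

    def forward(self, x):
        for _ in range(self.steps):
            dW = torch.randn_like(x) * self.dt ** 0.5   # Brownian increment
            x = x + self.f(x) * self.dt + self.sigma * dW
        return x

out = NeuralSDEBlock(dim=16)(torch.randn(8, 16))
```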

A Self-supervised Approach for Adversarial Robustness

Muzammal Naseer, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Fatih Porikli
2020 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)  
Adversarial training, which enhances robustness by modifying the target model's parameters, lacks such generalizability.  ...  Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems, e.g., for classification, segmentation and object detection.  ...  We wish to remove the adversarial patterns by training a neural network P_θ parameterized by θ, which we refer to as the purifier network.  ...
doi:10.1109/cvpr42600.2020.00034 dblp:conf/cvpr/NaseerKHKP20 fatcat:6xxnsydh3vewrmhamfysrobogu
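The purifier idea, a separate network P_θ trained to map a perturbed input back toward its clean version, can be sketched as a denoising objective. A minimal PyTorch illustration; the paper's self-supervised perturbation generation is replaced here by plain additive noise for brevity, and the architecture is illustrative:

```python
import torch
import torch.nn as nn

# A small convolutional purifier P_theta (illustrative architecture).
purifier = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(purifier.parameters(), lr=1e-3)

x_clean = torch.rand(8, 3, 32, 32)
# Stand-in for an adversarially perturbed batch.
x_adv = (x_clean + 0.03 * torch.randn_like(x_clean)).clamp(0, 1)

# One training step: teach the purifier to remove the perturbation.
loss = nn.functional.mse_loss(purifier(x_adv), x_clean)
opt.zero_grad()
loss.backward()
opt.step()
```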

Batch Group Normalization [article]

Xiao-Yun Zhou, Jiacheng Sun, Nanyang Ye, Xu Lan, Qijun Luo, Bo-Lin Lai, Pedro Esperanca, Guang-Zhong Yang, Zhenguo Li
2020 arXiv   pre-print
Deep Convolutional Neural Networks (DCNNs) are hard and time-consuming to train. Normalization is one effective solution.  ...  For example, for training ResNet-50 on ImageNet with a batch size of 2, BN achieves a Top-1 accuracy of 66.512% while BGN achieves 76.096%, a notable improvement.  ...  Introduction: Since AlexNet was proposed in (Krizhevsky, Sutskever, and Hinton 2012), the Deep Convolutional Neural Network (DCNN) has been a popular method for vision tasks including image classification  ...
arXiv:2012.02782v2 fatcat:anuqp4fo45bnnf47vql7n4qpce
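One way to see why such a method survives tiny batches is to pool normalization statistics over more than just the batch dimension. The sketch below is an illustrative reading of that idea, normalizing jointly over batch, channel group, and space; it is not the paper's exact BGN formulation:

```python
import torch

def batch_group_norm(x, num_groups=8, eps=1e-5):
    """Normalize over (batch, within-group channels, spatial) jointly,
    so statistics stay reliable even at batch size 1 or 2.
    Illustrative reading of the idea, not the published method."""
    n, c, h, w = x.shape
    g = num_groups
    xg = x.reshape(n, g, c // g, h, w)
    mean = xg.mean(dim=(0, 2, 3, 4), keepdim=True)
    var = xg.var(dim=(0, 2, 3, 4), keepdim=True, unbiased=False)
    xg = (xg - mean) / torch.sqrt(var + eps)
    return xg.reshape(n, c, h, w)

y = batch_group_norm(torch.randn(2, 64, 8, 8))
```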

Are Neural Ranking Models Robust?

Chen Wu, Ruqing Zhang
2022 ACM Transactions on Information Systems  
While neural ranking models are less robust than other IR models in most cases, some of them can still win 2 out of 5 tasks.  ...  2) the out-of-distribution (OOD) generalizability; and 3) the defensive ability against adversarial operations.  ...  by neural networks.  ...
doi:10.1145/3534928 fatcat:x2apj3lrmjejvkdsgqjjtnptky

Generalization Error in Deep Learning [article]

Daniel Jakubovitz, Raja Giryes, Miguel R. D. Rodrigues
2019 arXiv   pre-print
Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data.  ...  In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical  ...  Deep neural networks can achieve zero training error even when trained on a random labeling of the training data, meaning they can easily fit random labels, which is indicative of very  ...
arXiv:1808.01174v3 fatcat:yjem7ahdhbfg5glo2liadysrje
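The random-label observation quoted above is easy to reproduce in miniature: shuffle the labels of a small dataset and watch an over-parameterized network still drive training error toward zero. A sketch with toy stand-ins for the dataset and model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 32)
y = torch.randint(0, 10, (256,))          # labels are pure noise
net = nn.Sequential(nn.Linear(32, 512), nn.ReLU(), nn.Linear(512, 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    loss = nn.functional.cross_entropy(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

train_err = (net(x).argmax(1) != y).float().mean()
print(f"training error on random labels: {train_err:.3f}")  # approaches 0
```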

Towards Robust Deep Neural Networks with BANG [article]

Andras Rozsa, Manuel Gunther, Terrance E. Boult
2018 arXiv   pre-print
Machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors.  ...  In this paper, we present a novel theory to explain why this unpleasant phenomenon exists in deep neural networks.  ...  Acknowledgments: This research is based upon work funded in part by NSF IIS-1320956 and in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity  ...
arXiv:1612.00138v3 fatcat:kl4lquqs7fbbrju2rfoowpxese
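The vulnerability this abstract refers to can be demonstrated with the standard fast gradient sign method. Note FGSM is the classic attack used to expose the problem, not the BANG training approach itself; `model` is assumed to be a differentiable classifier:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Classic FGSM: a small perturbation aligned with the sign of the
    loss gradient, which often flips the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```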

MicronNet: A Highly Compact Deep Convolutional Neural Network Architecture for Real-time Embedded Traffic Sign Classification [article]

Alexander Wong, Mohammad Javad Shafiee, Michael St. Jules
2018 arXiv   pre-print
While deep neural networks have been demonstrated in recent years to provide state-of-the-art performance in traffic sign recognition, a key challenge for enabling the widespread deployment of deep neural  ...  of the proposed network.  ...
arXiv:1804.00497v3 fatcat:aduplkpw2vbzpncdbdjwfgoq4a
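For a sense of scale, a compact traffic-sign classifier in the spirit of this entry looks like the sketch below. This is an illustrative small CNN for the 43-class GTSRB setting, not the actual MicronNet architecture:

```python
import torch
import torch.nn as nn

# Illustrative compact CNN, not the published MicronNet design.
tiny_cnn = nn.Sequential(
    nn.Conv2d(3, 8, 5, stride=2, padding=2), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 43),                    # 43 GTSRB sign classes
)
n_params = sum(p.numel() for p in tiny_cnn.parameters())
print(f"parameters: {n_params}")          # a few thousand: embedded-friendly
```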

Physics-Guided Deep Learning for Dynamical Systems: A Survey [article]

Rui Wang, Rose Yu
2022 arXiv   pre-print
While deep learning (DL) provides novel alternatives for efficiently recognizing complex patterns and emulating nonlinear dynamics, its predictions do not necessarily obey the governing laws of physical  ...  Thus, the study of physics-guided DL emerged and has made great progress.  ...  a deep neural network, and enforcing the governing equations as a soft constraint on the output of the neural nets during training at the same time [114; 113; 76].  ...
arXiv:2107.01272v5 fatcat:k6hhdt6csnfebgkzrpuoeqkwzi
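The soft-constraint recipe in the snippet is the physics-informed loss: a data-fit term plus a penalty on the residual of the governing equation at collocation points. A minimal PyTorch sketch for the toy ODE du/dt = -u; the equation, network size, and weight `lam` are illustrative:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def physics_guided_loss(t_data, u_data, t_colloc, lam=1.0):
    """Data loss + soft penalty enforcing du/dt = -u at collocation points."""
    data_loss = nn.functional.mse_loss(net(t_data), u_data)
    t = t_colloc.clone().requires_grad_(True)
    u = net(t)
    du_dt, = torch.autograd.grad(u.sum(), t, create_graph=True)
    residual = du_dt + u                  # residual of du/dt = -u
    return data_loss + lam * residual.pow(2).mean()

loss = physics_guided_loss(
    torch.rand(16, 1), torch.rand(16, 1), torch.rand(64, 1))
```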

A Fully Spiking Hybrid Neural Network for Energy-Efficient Object Detection [article]

Biswadeep Chakraborty, Xueyuan She, Saibal Mukhopadhyay
2021 arXiv   pre-print
This paper proposes a Fully Spiking Hybrid Neural Network (FSHNN) for energy-efficient and robust object detection on resource-constrained platforms.  ...  It also outperforms these object detectors when subjected to noisy input data and less labeled training data, with a lower uncertainty error.  ...  This helps the FSHNN network achieve high performance as well as robustness to input noise and to training with less labeled data.  ...
arXiv:2104.10719v2 fatcat:cczszusuejg4hepmyushgowlwa
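The spiking building block underlying such networks is the leaky integrate-and-fire neuron. A minimal sketch of its discrete-time update; the decay and threshold are illustrative, and the surrogate gradients needed to train through the spike nonlinearity are omitted:

```python
import torch

def lif_step(v, x, decay=0.9, threshold=1.0):
    """One leaky integrate-and-fire step: leak, integrate the input,
    spike when the membrane potential crosses threshold, then reset."""
    v = decay * v + x                  # leaky integration
    spikes = (v >= threshold).float()  # binary spike output
    v = v * (1.0 - spikes)             # hard reset after a spike
    return v, spikes

v = torch.zeros(4)
for t in range(10):
    v, s = lif_step(v, x=torch.rand(4))
```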
Showing results 1 — 15 out of 5,077 results