77 Hits in 2.7 sec

On the distance between two neural networks and the stability of learning [article]

Jeremy Bernstein, Arash Vahdat, Yisong Yue, Ming-Yu Liu
2021 arXiv   pre-print
Since the resulting learning rule seems to require little to no learning rate tuning, it may unlock a simpler workflow for training deeper and more complex neural networks.  ...  Class-conditional generative adversarial network training We train a class-conditional generative adversarial network with projection discriminator [33, 37] on the CIFAR-10 dataset [41] .  ...  Generative adversarial learning [32] trains a discriminator network D to classify data as real or fake, and a generator network G is trained to fool D. Competition drives learning in both networks.  ... 
arXiv:2002.03432v3 fatcat:uf6uovk4kjgr3db2vqjl5mje6e
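The generator-vs-discriminator competition described in the snippet above can be illustrated with a minimal loss computation. This is a generic sketch of the standard (non-saturating) GAN objective, not the class-conditional projection-discriminator setup of the paper; the probability values are made-up illustrative numbers.

```python
import numpy as np

def bce(p, y):
    # Binary cross-entropy between predicted probability p and label y.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Hypothetical discriminator probabilities on a batch of real and fake samples.
d_real = np.array([0.9, 0.8, 0.95])   # D should push these toward 1
d_fake = np.array([0.2, 0.3, 0.1])    # ...and these toward 0

# Discriminator loss: classify real as 1 and fake as 0.
d_loss = bce(d_real, 1.0).mean() + bce(d_fake, 0.0).mean()

# Generator loss (non-saturating form): make D call the fakes real.
g_loss = bce(d_fake, 1.0).mean()
```

Here the discriminator is doing well (real scores high, fake scores low), so its loss is small while the generator's loss is large — the competition that, per the snippet, "drives learning in both networks".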

A case for new neural network smoothness constraints [article]

Mihaela Rosca, Theophane Weber, Arthur Gretton, Shakir Mohamed
2021 arXiv   pre-print
We tackle the question of model smoothness and show that it is a useful inductive bias which aids generalization, adversarial robustness, generative modeling and reinforcement learning.  ...  How sensitive should machine learning models be to input changes?  ...  -layerwise regularization applied to the entire space.  ... 
arXiv:2012.07969v3 fatcat:pa6ka7vtdfgz3d3czf5hmlgkb4

Deep Learning Assisted Predict of Lung Cancer on Computed Tomography Images using the Adaptive Hierarchical Heuristic Mathematical Model

Heng Yu, Zhiqing Zhou, Qiming Wang
2020 IEEE Access  
In this paper, the Adaptive Hierarchical Heuristic Mathematical Model (AHHMM) has been proposed for the deep learning approach.  ...  INDEX TERMS Lung cancer detection, deep learning, deep neural network, mathematical model.  ...  The measured value is compared with the extracted and trained features for cancer classification based on deep learning.  ... 
doi:10.1109/access.2020.2992645 fatcat:xu7fgjwd7rallkx4hodxn4va5e

Anomalous Example Detection in Deep Learning: A Survey [article]

Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song
2021 arXiv   pre-print
Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples resulting in incorrect outputs.  ...  The model employs a greedy layerwise training operation for unsupervised feature learning and supervised parameter tuning.  ...  The motivation is that while it is difficult to model every variant of anomaly distribution, one can learn effective heuristics for detecting OOD samples by exposing the model to diverse OOD datasets.  ... 
arXiv:2003.06979v2 fatcat:4mogo75b4rbxrc6vph2xmllkue

Anomalous Example Detection in Deep Learning: A Survey

Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song
2020 IEEE Access  
Deep Learning (DL) is vulnerable to out-of-distribution and adversarial examples resulting in incorrect outputs.  ...  The model employs a greedy layerwise training operation for unsupervised feature learning and supervised parameter tuning.  ...  The motivation is that while it is difficult to model every variant of anomaly distribution, one can learn effective heuristics for detecting OOD samples by exposing the model to diverse OOD datasets.  ... 
doi:10.1109/access.2020.3010274 fatcat:3xjpfc64nvcbtfpwtwwbitjuvm

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers [article]

Chen Zhu, Renkun Ni, Ping-yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
2020 arXiv   pre-print
Convex relaxations are effective for training and certifying neural networks against norm-bounded adversarial attacks, but they leave a large gap between certifiable and empirical robustness.  ...  We propose two regularizers that can be used to train neural networks that yield tighter convex relaxation bounds for robustness.  ...  In addition, adversarial training uses adversarial examples as opposed to clean examples during training, so that the network can learn how to classify adversarial  ... 
arXiv:2002.09766v1 fatcat:yli2k7rcpfgfvnw57b7r6eebvq
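The snippet above contrasts adversarial training, which trains on perturbed inputs, with training on clean examples. As a hedged illustration, here is the classic fast-gradient-sign perturbation applied to a plain logistic-regression model — a generic sketch, not the convex-relaxation method the paper proposes; `w`, `x`, `y`, and `eps` are made-up values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(w, x, y):
    # Binary cross-entropy for one example under a linear model.
    p = sigmoid(np.dot(w, x))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_example(w, x, y, eps):
    # For logistic regression, dL/dx = (sigmoid(w.x) - y) * w.
    # FGSM moves x one eps-step in the sign of that gradient,
    # producing the perturbed input used during adversarial training.
    grad = (sigmoid(np.dot(w, x)) - y) * w
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, -0.4])
y = 1.0
x_adv = fgsm_example(w, x, y, eps=0.1)
```

Training then minimizes `logistic_loss(w, x_adv, y)` instead of the clean loss, so the model sees worst-case (to first order) inputs within the epsilon ball.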

PnP-AdaNet: Plug-and-Play Adversarial Domain Adaptation Network with a Benchmark at Cross-modality Cardiac Segmentation [article]

Qi Dou, Cheng Ouyang, Cheng Chen, Hao Chen, Ben Glocker, Xiahai Zhuang, Pheng-Ann Heng
2018 arXiv   pre-print
With adversarial learning, we build two discriminators whose inputs are respectively multi-level features and predicted segmentation masks.  ...  However, the generalization capability of deep models on test data with different distributions remains a major challenge.  ...  Loss Functions and Training Strategies In adversarial learning, the DAM is pitted against an adversary with the above two discriminators.  ... 
arXiv:1812.07907v1 fatcat:muziwawywja45cjfhrpwqr7bwm

Rethinking Reconstruction Autoencoder-Based Out-of-Distribution Detection [article]

Yibo Zhou
2022 arXiv   pre-print
In some scenarios, a classifier is required to detect out-of-distribution samples far from its training data.  ...  With desirable characteristics, reconstruction autoencoder-based methods deal with this problem by using input reconstruction error as a metric of novelty vs. normality.  ...  Many existing methods rely on training or tuning with data labelled as OoD from other categories [28, 32] , adversaries [15, 19] or the leave-out sub-set of training samples [30] .  ... 
arXiv:2203.02194v2 fatcat:hh7aurrw2relrequbowxekq65m
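The reconstruction-error scoring rule mentioned in the snippet above is easy to demonstrate with a linear stand-in for an autoencoder (a PCA projection): in-distribution inputs survive the encode/decode round trip almost unchanged, while off-manifold inputs do not. This is an illustrative sketch under synthetic assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume in-distribution data lies near a 2-D subspace of R^5.
basis = rng.normal(size=(2, 5))
train = rng.normal(size=(500, 2)) @ basis

# Linear "autoencoder" stand-in: project onto the top-2 principal directions.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:2]

def novelty_score(x):
    """Reconstruction error: distance from x to its encode/decode round trip."""
    code = (x - mean) @ components.T        # encode
    recon = code @ components + mean        # decode
    return float(np.linalg.norm(x - recon))

in_dist = rng.normal(size=2) @ basis       # fresh sample from the manifold
ood = 5.0 * rng.normal(size=5)             # generic point, far off-manifold
```

Thresholding `novelty_score` then separates the two: the in-distribution sample scores near zero, the off-manifold one does not.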

Learning by Turning: Neural Architecture Aware Optimisation [article]

Yang Liu, Jeremy Bernstein, Markus Meister, Yisong Yue
2021 arXiv   pre-print
Nero trains reliably without momentum or weight decay, works in situations where Adam and SGD fail, and requires little to no learning rate tuning.  ...  The paper concludes by discussing how this geometric connection between architecture and optimisation may impact theories of generalisation in deep learning.  ...  The Adam optimiser was used for training with a fixed learning rate of 0.01.  ... 
arXiv:2102.07227v2 fatcat:lprb6bbefvchtov4cwsyccdcy4

To Relieve Your Headache of Training an MRF, Take AdVIL [article]

Chongxuan Li, Chao Du, Kun Xu, Max Welling, Jun Zhu, Bo Zhang
2020 arXiv   pre-print
We propose a black-box algorithm called Adversarial Variational Inference and Learning (AdVIL) to perform inference and learning on a general Markov random field (MRF).  ...  On one hand, compared with contrastive divergence, AdVIL requires a minimal assumption about the model structure and can deal with a broader family of MRFs.  ...  The layerwise structure potentially benefits the training of both methods.  ... 
arXiv:1901.08400v3 fatcat:bao2bnestberbcbxi2qrsapq2q

A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability [article]

Xiaowei Huang and Daniel Kroening and Wenjie Ruan and James Sharp and Youcheng Sun and Emese Thamo and Min Wu and Xinping Yi
2020 arXiv   pre-print
Research to address these concerns is particularly active, with a significant number of papers released in the past few years.  ...  This survey paper conducts a review of the current research effort into making DNNs safe and trustworthy, by focusing on four aspects: verification, testing, adversarial attack and defence, and interpretability  ...  ., 2018] introduces ensemble adversarial training, which augments training data with perturbations transferred from other models.  ... 
arXiv:1812.08342v5 fatcat:awndtbca4jbi3pcz5y2d4ymoja

A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability

Xiaowei Huang, Daniel Kroening, Wenjie Ruan, James Sharp, Youcheng Sun, Emese Thamo, Min Wu, Xinping Yi
2020 Computer Science Review  
., 2018] introduces ensemble adversarial training, which augments training data with perturbations transferred from other models.  ...  Please note that DeepFool is a heuristic algorithm for a neural network classifier that provides no guarantee to find the adversarial image with the minimum distortion, but in practice it is a very effective  ... 
doi:10.1016/j.cosrev.2020.100270 fatcat:biji56htvnglfhl7n3jnuelu2i

Deep Neural Mobile Networking [article]

Chaoyun Zhang
2020 arXiv   pre-print
This makes monitoring and managing the multitude of network elements intractable with existing tools and impractical for traditional machine learning algorithms that rely on hand-crafted feature engineering  ...  In particular, deep learning based solutions can automatically extract features from raw data, without human expertise.  ...  with existing theoretical methods or heuristics.  ... 
arXiv:2011.05267v1 fatcat:yz2zp5hplzfy7h5kptmho7mbhe

Salient Object Detection Techniques in Computer Vision—A Survey

Ashish Kumar Gupta, Ayan Seal, Mukesh Prasad, Pritee Khanna
2020 Entropy  
Relevant saliency modeling trends with key issues, core techniques, and the scope for future research work have been discussed in the context of difficulties often faced in salient object detection.  ...  These methods can be broadly categorized into two categories based on their feature engineering mechanism: conventional or deep learning-based.  ...  Adversarial Training Based Models Generative Adversarial Networks (GANs) have gained a lot of attention from researchers in fields such as image generation [212] , image super-resolution [213] and  ... 
doi:10.3390/e22101174 pmid:33286942 pmcid:PMC7597345 fatcat:3p5d2nal4vhxbi2via3g7oicga

A Low Effort Approach to Structured CNN Design Using PCA [article]

Isha Garg, Priyadarshini Panda, Kaushik Roy
2019 arXiv   pre-print
Deep learning models hold state of the art performance in many fields, yet their design is still based on heuristics or grid search methods.  ...  Model compression is an active field of research that targets the problem of realizing deep learning models in hardware.  ...  Along with these differences, to the best of our knowledge, none of the prior works demonstrate a heuristic to optimize depth of a network.  ... 
arXiv:1812.06224v3 fatcat:xmxnzrhrsrd2df5gjbycj6lpau
Showing results 1 — 15 out of 77 results