A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2021; you can also visit the original URL. The file type is application/pdf.
In-N-Out: Pre-Training and Self-Training using Auxiliary Information for Out-of-Distribution Robustness
[article]
2021
arXiv
pre-print
To get the best of both worlds, we introduce In-N-Out, which first trains a model with auxiliary inputs and uses it to pseudolabel all the in-distribution inputs, then pre-trains a model on OOD auxiliary ...
error but can hurt OOD error; but (ii) using auxiliary information as outputs of auxiliary pre-training tasks improves OOD error. ...
We use a smaller learning rate of 0.01 for this pre-trained model. In-N-Out and Repeated self-training. ...
arXiv:2012.04550v3
fatcat:va3qhyxnjzhnbgfozkt7blm72y
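As a rough illustration of the In-N-Out recipe sketched in the snippet above, here is a toy end-to-end sketch. The data, shapes, and scikit-learn models are stand-ins chosen for brevity, not the authors' setup (the paper uses neural networks); every name below is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)
# Toy stand-ins: x = inputs, z = auxiliary information, y = labels.
x_id = rng.normal(size=(200, 5))            # in-distribution, labeled
z_id = rng.normal(size=(200, 3))
y_id = (x_id[:, 0] + z_id[:, 0] > 0).astype(int)
x_ood = rng.normal(loc=1.0, size=(300, 5))  # out-of-distribution, unlabeled
z_ood = x_ood[:, :3] + rng.normal(scale=0.1, size=(300, 3))

# 1. Aux-inputs model: auxiliary information used as extra input features.
aux_in = LogisticRegression().fit(np.hstack([x_id, z_id]), y_id)
# 2. Pseudolabel all the in-distribution inputs with the aux-inputs model.
y_pseudo = aux_in.predict(np.hstack([x_id, z_id]))
# 3. Aux-outputs pre-training: predict auxiliary information from x on OOD
#    data (a linear map standing in for a pre-trained representation).
aux_out = Ridge().fit(x_ood, z_ood)
features = lambda x: np.hstack([x, aux_out.predict(x)])
# 4. Self-train the pre-trained model on true labels plus pseudolabels.
final = LogisticRegression().fit(
    np.vstack([features(x_id), features(x_id)]),
    np.concatenate([y_id, y_pseudo]),
)
```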
A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges
[article]
2021
arXiv
pre-print
Despite having similar and shared concepts, out-of-distribution, open-set, and anomaly detection have been investigated independently. ...
Failure to recognize an out-of-distribution (OOD) sample, and consequently assign that sample to an in-class label significantly compromises the reliability of a model. ...
Asano for the extremely useful discussions and for reviewing the paper prior to submission. ...
arXiv:2110.14051v1
fatcat:zqfomgebjjb3zl4snmkrojqdny
WILDS: A Benchmark of in-the-Wild Distribution Shifts
[article]
2021
arXiv
pre-print
This gap remains even with models trained by existing methods for tackling distribution shifts, underscoring the need for new methods for training models that are more robust to the types of distribution ...
On each dataset, we show that standard training yields substantially lower out-of-distribution than in-distribution performance. ...
Acknowledgements Many people generously volunteered their time and expertise to advise us on Wilds. ...
arXiv:2012.07421v3
fatcat:bsohmukpszajxeadeo25oxmbs4
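For readers who want to try the benchmark, the sketch below follows the wilds package's documented loading interface (pip install wilds); the dataset choice, image size, and batch size are arbitrary examples, not recommendations from the paper.

```python
import torchvision.transforms as transforms
from wilds import get_dataset
from wilds.common.data_loaders import get_train_loader

# Download one of the WILDS datasets and wrap its training split.
dataset = get_dataset(dataset="iwildcam", download=True)
train_data = dataset.get_subset(
    "train",
    transform=transforms.Compose(
        [transforms.Resize((224, 224)), transforms.ToTensor()]
    ),
)
train_loader = get_train_loader("standard", train_data, batch_size=16)

for x, y_true, metadata in train_loader:
    ...  # standard training step; OOD evaluation uses the held-out splits
```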
Advances in Electron Microscopy with Deep Learning
2020
Zenodo
and automatic data clustering by t-distributed stochastic neighbour embedding; adaptive learning rate clipping to stabilize learning; generative adversarial networks for compressed sensing with spiral, uniformly spaced and other fixed sparse scan paths; recurrent neural networks trained to piecewise adapt sparse scan paths to specimens by reinforcement learning; improving signal-to-noise; and conditional ...
Acknowledgements Thanks go to Jeremy Sloan and Martin Lotz for internally reviewing this article. ...
doi:10.5281/zenodo.4598227
fatcat:hm2ksetmsvf37adjjefmmbakvq
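The snippet mentions adaptive learning rate clipping to stabilize learning. The sketch below is a generic running-moment loss clipper in that spirit, not the article's exact method; the n_sigma and decay values are illustrative assumptions.

```python
import torch

class ClippedLoss:
    """Scale down losses far above a running mean to damp spikes.
    A generic sketch in the spirit of adaptive learning rate clipping;
    the update rule and hyperparameters are illustrative assumptions."""

    def __init__(self, n_sigma: float = 3.0, decay: float = 0.999):
        self.n_sigma, self.decay = n_sigma, decay
        self.mu1 = None  # running estimate of E[loss]
        self.mu2 = None  # running estimate of E[loss**2]

    def __call__(self, loss: torch.Tensor) -> torch.Tensor:
        value = loss.detach()
        if self.mu1 is None:  # initialize running moments on first call
            self.mu1, self.mu2 = value.clone(), (value * value).clone()
        sigma = (self.mu2 - self.mu1 ** 2).clamp_min(0.0).sqrt()
        limit = self.mu1 + self.n_sigma * sigma
        if value > limit:  # rescale so the backward pass sees a bounded loss
            loss = loss * (limit / value)
        self.mu1 = self.decay * self.mu1 + (1 - self.decay) * loss.detach()
        self.mu2 = self.decay * self.mu2 + (1 - self.decay) * loss.detach() ** 2
        return loss
```

A typical use would be loss = clipper(criterion(pred, target)) before calling backward().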
Pareto Navigation Gradient Descent: a First-Order Algorithm for Optimization in Pareto Set
[article]
2021
arXiv
pre-print
We propose a first-order algorithm that approximately solves OPT-in-Pareto using only gradient information, with both high practical efficiency and theoretically guaranteed convergence property. ...
Unfortunately, despite being a highly useful framework, efficient algorithms for OPT-in-Pareto have been largely missing, especially for large-scale, non-convex, and non-linear objectives in deep learning ...
To apply Ma et al. (2020), in the first stage, we need to start with several well-distributed models (i.e., the ones obtained by linear scalarization with different preference weights) and Ma et al. (2020 ...
arXiv:2110.08713v1
fatcat:ffmcvsq6vvei3env2obnhzabni
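To make the first-order idea concrete, here is the classical two-objective multiple-gradient-descent step: the min-norm convex combination of the two gradients yields a direction that decreases both objectives. This is a standard MGDA-style update shown only for illustration; it is not the paper's Pareto navigation algorithm.

```python
import numpy as np

def mgda_two_objective_direction(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Min-norm convex combination d = a*g1 + (1-a)*g2 of two gradients.
    Classical MGDA step, shown only to illustrate first-order
    multi-objective descent; not the PNG algorithm itself."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:  # gradients identical: either one works
        return g1
    a = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return a * g1 + (1.0 - a) * g2

# Example: descend x along the combined direction of two quadratics.
x = np.array([2.0, -1.0])
for _ in range(100):
    g1 = 2 * (x - np.array([1.0, 0.0]))  # grad of ||x - (1, 0)||^2
    g2 = 2 * (x - np.array([0.0, 1.0]))  # grad of ||x - (0, 1)||^2
    x -= 0.1 * mgda_two_objective_direction(g1, g2)
# x converges to a point on the Pareto set between the two optima.
```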
Review: Deep Learning in Electron Microscopy
[article]
2020
arXiv
pre-print
For context, we review popular applications of deep learning in electron microscopy. ...
We then review neural network components, popular architectures, and their optimization. Finally, we discuss future directions of deep learning in electron microscopy. ...
Acknowledgements Thanks go to Jeremy Sloan and Martin Lotz for internally reviewing this article. ...
arXiv:2009.08328v4
fatcat:umocfp5dgvfqzck4ontlflh5ca
Deep representation learning for speech recognition
[article]
2021
We use attribute-aware and adaptive training strategies to model the underlying factors of variation related to the speakers and the acoustic conditions. ...
First and foremost, we aim to improve the robustness of the acoustic models. ...
We call the information update "intra-frequency" when the number of groups n_in = n_out, and "inter-frequency" when n_in ≠ n_out. ...
doi:10.7488/era/1174
fatcat:g3bwcvkvuvdojd2nzawj4qtxq4
Applications of Deep Neural Networks with Keras
[article]
2022
arXiv
pre-print
Deep learning is a group of exciting new technologies for neural networks. ...
Through a combination of advanced training techniques and neural network architectural components, it is now possible to create neural networks that can handle tabular data, images, text, and audio as ...
Specifically, Equation 4.2 can determine the standard deviation: Var(W) = 2 / (n_in + n_out). The above equation shows how to obtain the variance for all weights. ...
arXiv:2009.05673v5
fatcat:h3jghqylwrbfvfglmwutlfpmay
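The formula in the snippet is the Glorot/Xavier variance. A minimal sketch of sampling weights with Var(W) = 2 / (n_in + n_out), where the layer sizes are example values:

```python
import numpy as np

def glorot_normal(n_in: int, n_out: int, rng=None) -> np.ndarray:
    """Draw a weight matrix with Var(W) = 2 / (n_in + n_out),
    i.e. standard deviation sqrt(2 / (n_in + n_out))."""
    rng = rng or np.random.default_rng()
    std = np.sqrt(2.0 / (n_in + n_out))
    return rng.normal(0.0, std, size=(n_in, n_out))

W = glorot_normal(256, 128)  # e.g. a dense layer with 256 inputs, 128 outputs
```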
A multi-paradigm approach supporting the modular execution of reconfigurable hybrid systems
2010
Simulation (San Diego, Calif.)
We present in this paper how our component-based approach for reconfigurable mechatronic systems, MECHATRONIC UML, efficiently handles the complex interplay of discrete behavior and continuous behavior ...
Therefore, a tight integration of structural and behavioral models of the different domains is required. ...
In particular we thank Sven Burmester for his substantial contributions and Tobias Eckardt for proof-reading. ...
doi:10.1177/0037549710366824
fatcat:nkj53wi2wna6vj7qcs5idtnjwe
Application of Prior Information to Discriminative Feature Learning
2018
Our proposed approach automatically selects the most useful low-level features and effectively combines them into more powerful and discriminative features for our specific image classification problem ...
When multiple independent factors exist in the image generation process and only some of them are of interest to us, we propose a novel multi-task adversarial network to learn a disentangled feature which ...
Detailed information about each model is shown in Table 4.6, including the pre-training auxiliary dataset and the number of scales. ...
doi:10.17863/cam.32915
fatcat:gto4zcwzgnhk5hjidq5k5uwf3u