54 Hits in 5.8 sec

Deep Deterministic Uncertainty: A Simple Baseline [article]

Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H.S. Torr, Yarin Gal
<span title="2022-01-28">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This conceptually simple Deep Deterministic Uncertainty (DDU) baseline can also be used to disentangle aleatoric and epistemic uncertainty and performs as well as Deep Ensembles, the state-of-the-art  ...  Crucially, without using their more complex methods for estimating uncertainty, a single softmax neural net with such a feature-space, achieved via residual connections and spectral normalization, outperforms  ...  As mentioned in §3, DDU consists of a deterministic softmax model trained with appropriate inductive biases.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.11582v3">arXiv:2102.11582v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/tt75wgvfxvdnbh3vabtj62rjwy">fatcat:tt75wgvfxvdnbh3vabtj62rjwy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220206195311/https://arxiv.org/pdf/2102.11582v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6e/84/6e84d6788bdd1f99a0ed322cf35ae7b2fb81aa66.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.11582v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Stochastic Segmentation Networks: Modelling Spatially Correlated Aleatoric Uncertainty [article]

Miguel Monteiro, Loïc Le Folgoc, Daniel Coelho de Castro, Nick Pawlowski, Bernardo Marques, Konstantinos Kamnitsas, Mark van der Wilk, Ben Glocker
<span title="2020-12-22">2020</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In this paper, we introduce stochastic segmentation networks (SSNs), an efficient probabilistic method for modelling aleatoric uncertainty with any image segmentation network architecture.  ...  SSNs outperform state-of-the-art for modelling correlated uncertainty in ambiguous images while being much simpler, more flexible, and more efficient.  ...  Methods will often rely on inductive biases to capture structure as opposed to modelling it directly.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.06015v2">arXiv:2006.06015v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/pdh7d57hmve2bhrkesh27q3otu">fatcat:pdh7d57hmve2bhrkesh27q3otu</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20200613003543/https://arxiv.org/pdf/2006.06015v1.pdf" title="fulltext PDF download [not primary version]" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <span style="color: #f43e3e;">&#10033;</span> <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/55/b3/55b3f305a7749953e07f6cf522b0d40bd578c2a6.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.06015v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods

Eyke Hüllermeier, Willem Waegeman
<span title="2021-03-08">2021</span> <i title="Springer Science and Business Media LLC"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/h4nnd7sxwzcwhetu5qkjbcdh6u" style="color: black;">Machine Learning</a> </i> &nbsp;
In particular, this includes the importance of distinguishing between (at least) two different types of uncertainty, often referred to as aleatoric and epistemic.  ...  In line with the statistical tradition, uncertainty has long been perceived as almost synonymous with standard probability and probabilistic predictions.  ...  as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s10994-021-05946-3">doi:10.1007/s10994-021-05946-3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/6dndhzin5fgnrp4bjfer47mnt4">fatcat:6dndhzin5fgnrp4bjfer47mnt4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210419114439/https://biblio.ugent.be/publication/8703853/file/8703855" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/1a/37/1a37c9c8e1289c3c57aaa5fd5189bd9c8475a259.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.1007/s10994-021-05946-3"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> springer.com </button> </a>

Assigning Confidence to Molecular Property Prediction [article]

AkshatKumar Nigam, Robert Pollice, Matthew F. D. Hurley, Riley J. Hickman, Matteo Aldeghi, Naruki Yoshikawa, Seyone Chithrananda, Vincent A. Voelz, Alán Aspuru-Guzik
<span title="2021-02-23">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Lastly, we investigate how these uncertainties propagate to generative models, as they are usually coupled with property predictors.  ...  First, our considerations for assessing confidence begin with dataset bias and size, data-driven property prediction and feature design.  ...  Replacing point-estimated network parameters with distributions allows for the quantification of epistemic uncertainty as well as heteroscedastic aleatoric uncertainty.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.11439v1">arXiv:2102.11439v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/vblssmndlvchbjmubsnjjjppcy">fatcat:vblssmndlvchbjmubsnjjjppcy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210227101201/https://arxiv.org/pdf/2102.11439v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b3/3a/b33a170a4db9078d702d0d4c2e50cabe44c4c762.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2102.11439v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Understanding Softmax Confidence and Uncertainty [article]

Tim Pearce, Alexandra Brintrup, Jun Zhu
<span title="2021-06-09">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
This paper investigates this contradiction, identifying two implicit biases that do encourage softmax confidence to correlate with epistemic uncertainty: 1) Approximately optimal decision boundary structure  ...  It is often remarked that neural networks fail to increase their uncertainty when predicting on data far from the training distribution.  ...  Torr, and Yarin Gal. Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty. ArXiv, 2021. URL http://arxiv.org/abs/2102.11582. Oleg R.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.04972v1">arXiv:2106.04972v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/k6fhqt7vlzhspmz6u4nlpumbny">fatcat:k6fhqt7vlzhspmz6u4nlpumbny</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210618191945/https://arxiv.org/pdf/2106.04972v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fd/ae/fdae6cb9beff9749009798b95192a4549f8761a2.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.04972v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Review of Uncertainty Quantification in Deep Learning: Techniques, Applications and Challenges [article]

Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, Vladimir Makarenkov, Saeid Nahavandi
<span title="2021-01-06">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Uncertainty quantification (UQ) plays a pivotal role in reduction of uncertainties during both optimization and decision making processes.  ...  In this regard, researchers have proposed different UQ methods and examined their performance in a variety of applications such as computer vision (e.g., self-driving cars and object detection), image  ...  The ensemble method helped to capture both aleatoric and epistemic uncertainty.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.06225v4">arXiv:2011.06225v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/wwnl7duqwbcqbavat225jkns5u">fatcat:wwnl7duqwbcqbavat225jkns5u</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210113234503/https://arxiv.org/pdf/2011.06225v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f1/4f/f14fc9e399d44463a17cc47a9b339b58f6ef7502.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2011.06225v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Getting a CLUE: A Method for Explaining Uncertainty Estimates [article]

Javier Antorán, Umang Bhatt, Tameem Adel, Adrian Weller, José Miguel Hernández-Lobato
<span title="2021-03-18">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
We address this gap by proposing a novel method for interpreting uncertainty estimates from differentiable probabilistic models, like Bayesian Neural Networks (BNNs).  ...  We validate CLUE through 1) a novel framework for evaluating counterfactual explanations of uncertainty, 2) a series of ablation experiments, and 3) a user study.  ...  Modeling Epistemic and Aleatoric Uncertainty with Bayesian Neural Networks and Latent Variables. PhD thesis, Technical University of Munich, 2019.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.06848v2">arXiv:2006.06848v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/5chka3x76ngoxev5y7umed42n4">fatcat:5chka3x76ngoxev5y7umed42n4</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210327015953/https://arxiv.org/pdf/2006.06848v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0c/84/0c84ec883ce29b6b491ff668ec74b968dced04da.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2006.06848v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Posterior Meta-Replay for Continual Learning [article]

Christian Henning, Maria R. Cervera, Francesco D'Angelo, Johannes von Oswald, Regina Traber, Benjamin Ehret, Seijin Kobayashi, Benjamin F. Grewe, João Sacramento
<span title="2021-10-21">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Experiments on standard benchmarks show that our probabilistic hypernetworks compress sequences of posterior parameter distributions with virtually no forgetting.  ...  In principle, Bayesian learning directly applies to this setting, since recursive and one-off Bayesian updates yield the same result.  ...  We are grateful for discussions with Harald Dermutz, Simone Carlo Surace and Jean-Pascal Pfister.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.01133v3">arXiv:2103.01133v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4tjj74x74vew7gqif4atmg5qjm">fatcat:4tjj74x74vew7gqif4atmg5qjm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20211023120102/https://arxiv.org/pdf/2103.01133v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/6d/ca/6dca98ebce5cbdeedd1911dde18e66771b855506.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2103.01133v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Understanding Uncertainty in Bayesian Deep Learning [article]

Cooper Lorsung
<span title="2021-05-21">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Neural Linear Models (NLM) are deep Bayesian models that produce predictive uncertainty by learning features from the data and then performing Bayesian linear regression over these features.  ...  We identify the underlying reasons for this behavior and propose a novel training method that can both capture useful predictive uncertainties as well as allow for incorporation of domain knowledge.  ...  Bayesian statistics makes interpreting uncertainty intuitive. Broadly, the two types of uncertainty are aleatoric and epistemic uncertainty.  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.13055v1">arXiv:2106.13055v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/x43iypncb5b2nischk45e3jdcy">fatcat:x43iypncb5b2nischk45e3jdcy</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210626121659/https://arxiv.org/pdf/2106.13055v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/f1/1f/f11fee076df027d8d9b5384c27ed89fbd7d3d5d1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.13055v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Scene Uncertainty and the Wellington Posterior of Deterministic Image Classifiers [article]

Stephanie Tsuei, Aditya Golatkar, Stefano Soatto
<span title="2021-06-25">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Deep neural networks commonly used for image classification are deterministic maps from an input image to an output class.  ...  Additional alternatives include generative adversarial networks, conditional prior networks, and supervised single-view reconstruction.  ...  For instance, Bayesian Neural Networks [23, 13, 30, 27] produce not a single discriminant, but a distribution that captures epistemic uncertainty, from which one can obtain a distribution of outcomes  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.13870v1">arXiv:2106.13870v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/4uzrbsuhczgnpbiu33sbimr7qi">fatcat:4uzrbsuhczgnpbiu33sbimr7qi</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210630012521/https://arxiv.org/pdf/2106.13870v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/4e/ad/4ead079413dad33c1791e96c85aded0397fb6c51.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.13870v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Learning Structured Gaussians to Approximate Deep Ensembles [article]

Ivor J.A. Simpson, Sara Vicente, Neill D.F. Campbell
<span title="2022-03-29">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
Importantly, this approach captures the uncertainty and structured correlations in the predictions explicitly in a formal distribution, rather than implicitly through sampling alone.  ...  This is achieved through a convolutional neural network that predicts the mean and covariance of the distribution, where the inverse covariance is parameterised by a sparsely structured Cholesky matrix  ...  As a deterministic approximation to the output of an ensemble, we seek to capture all forms of uncertainty captured by the ensemble (e.g. aleatoric and epistemic).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.15485v1">arXiv:2203.15485v1</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/elsljl5vi5acdhe2oxvwwyfyhe">fatcat:elsljl5vi5acdhe2oxvwwyfyhe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220622150910/https://arxiv.org/pdf/2203.15485v1.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/b6/8a/b68af3efab24cfb6b113f2452f9928ecd86ca2eb.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2203.15485v1" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

A Survey of Uncertainty in Deep Neural Networks [article]

Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, Muhammad Shahzad, Wen Yang (+2 others)
<span title="2022-01-18">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensemble of neural networks, and test-time data augmentation approaches is introduced and different  ...  As a result, different types and sources of uncertainty have been identified and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed.  ...  (2) the concepts of aleatoric and epistemic uncertainty in neural networks and discussed different concepts to model  ...  For a new data sample x∗ ∈ X, a neural network trained on  ...
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.03342v3">arXiv:2107.03342v3</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/cex5j3xq5fdijjdtdbt2ixralm">fatcat:cex5j3xq5fdijjdtdbt2ixralm</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220127192259/https://arxiv.org/pdf/2107.03342v3.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/fc/70/fc70db46738fff97d9ee3d66c6f9c57794d7b4fa.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2107.03342v3" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Graceful Degradation and Related Fields [article]

Jack Dymond
<span title="2021-06-24">2021</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
In passive approaches, graceful degradation is handled and achieved by the model in a self-contained manner, in active approaches the model is updated upon encountering epistemic uncertainties.  ...  This work presents a definition and discussion of graceful degradation and where it can be applied in deployed visual systems.  ...  These types of uncertainty are known as aleatoric and epistemic uncertainty (Hüllermeier & Waegeman (2020) ).  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.11119v2">arXiv:2106.11119v2</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/rxahwlfnm5hg7ln7uutgnkzdwq">fatcat:rxahwlfnm5hg7ln7uutgnkzdwq</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20210626131228/https://arxiv.org/pdf/2106.11119v2.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/95/87/95870b1cd7bbc23f364e4bf98d5c6081e27060be.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2106.11119v2" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Bayesian Deep Learning and a Probabilistic Perspective of Generalization [article]

Andrew Gordon Wilson, Pavel Izmailov
<span title="2022-03-30">2022</span> <i > arXiv </i> &nbsp; <span class="release-stage" >pre-print</span>
From this perspective, we explain results that have been presented as mysterious and distinct to neural network generalization, such as the ability to fit images with random labels, and show that these  ...  Bayesian marginalization can particularly improve the accuracy and calibration of modern deep neural networks, which are typically underspecified by the data, and can represent many compelling but different  ...  AGW and PI are supported by an Amazon Research Award, Facebook Research, NSF I-DISRE 193471, NIH R01 DA048764-01A1, NSF IIS-1563887, and NSF IIS-1910266. We thank Greg Benton for helpful discussions.  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.08791v4">arXiv:2002.08791v4</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bvezqsveg5bjppygv6saqeng2m">fatcat:bvezqsveg5bjppygv6saqeng2m</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220508074339/https://arxiv.org/pdf/2002.08791v4.pdf" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/af/92/af9280741ef627f0d6c8437605d002d3bfc2d1b1.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener" href="https://arxiv.org/abs/2002.08791v4" title="arxiv.org access"> <button class="ui compact blue labeled icon button serp-button"> <i class="file alternate outline icon"></i> arxiv.org </button> </a>

Recent Advances in Video Analytics for Rail Network Surveillance for Security, Trespass and Suicide Prevention—A Survey

Tianhao Zhang, Waqas Aftab, Lyudmila Mihaylova, Christian Langran-Wheeler, Samuel Rigby, David Fletcher, Steve Maddock, Garry Bosworth
<span title="2022-06-07">2022</span> <i title="MDPI AG"> <a target="_blank" rel="noopener" href="https://fatcat.wiki/container/taedaf6aozg7vitz5dpgkojane" style="color: black;">Sensors</a> </i> &nbsp;
Railway networks systems are by design open and accessible to people, but this presents challenges in the prevention of events such as terrorism, trespass, and suicide fatalities.  ...  State-of-the-art methods for object detection and behaviour recognition applied to rail network surveillance systems are introduced, and the ethics of handling personal data and the use of automated systems  ...  The method of Kendall and Gal [119] estimates the aleatoric and epistemic uncertainties by constructing a Bayesian neural network with the last layer before activation consisting of mean and variance  ... 
<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s22124324">doi:10.3390/s22124324</a> <a target="_blank" rel="external noopener" href="https://www.ncbi.nlm.nih.gov/pubmed/35746103">pmid:35746103</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/reaok5eq2rdsloxnqimc4yyfxe">fatcat:reaok5eq2rdsloxnqimc4yyfxe</a> </span>
<a target="_blank" rel="noopener" href="https://web.archive.org/web/20220616002340/https://mdpi-res.com/d_attachment/sensors/sensors-22-04324/article_deploy/sensors-22-04324-v2.pdf?version=1654670216" title="fulltext PDF download" data-goatcounter-click="serp-fulltext" data-goatcounter-title="serp-fulltext"> <button class="ui simple right pointing dropdown compact black labeled icon button serp-button"> <i class="icon ia-icon"></i> Web Archive [PDF] <div class="menu fulltext-thumbnail"> <img src="https://blobs.fatcat.wiki/thumbnail/pdf/0c/9a/0c9ab9979aab2c94b407be4cd76fef906e3a12af.180px.jpg" alt="fulltext thumbnail" loading="lazy"> </div> </button> </a> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.3390/s22124324"> <button class="ui left aligned compact blue labeled icon button serp-button"> <i class="unlock alternate icon" style="background-color: #fb971f;"></i> mdpi.com </button> </a>
Showing results 1–15 out of 54