85,316 Hits in 8.2 sec

Towards Out-of-Distribution Detection with Divergence Guarantee in Deep Generative Models [article]

Yufeng Zhang, Wanwei Liu, Zhenbang Chen, Ji Wang, Zhiming Liu, Kenli Li, Hongmei Wei
2021-09-16, arXiv pre-print, arXiv:2002.03328v4 (https://arxiv.org/abs/2002.03328v4)
Recent research has revealed that deep generative models, including flow-based models and variational autoencoders (VAEs), may assign higher likelihood to out-of-distribution (OOD) data than to in-distribution (ID) data. ... However, we cannot sample OOD data from the model. This counterintuitive phenomenon has not been satisfactorily explained. ... In this paper, we focus on unsupervised OOD detection using deep generative models (DGMs), including flow-based models and VAEs. ...
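
The likelihood-thresholding setup these papers analyze can be sketched in a few lines. In the sketch below, a multivariate Gaussian stands in for the deep generative model (a flow or VAE would supply `log_likelihood` instead); the function names and the percentile threshold are illustrative assumptions, not any paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian(x):
    """Fit mean and (regularized) covariance of in-distribution data."""
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(x.shape[1])
    return mu, cov

def log_likelihood(x, mu, cov):
    """Log-density of each row of x under N(mu, cov)."""
    d = x.shape[1]
    diff = x - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    maha = np.einsum("ij,jk,ik->i", diff, inv, diff)
    return -0.5 * (maha + logdet + d * np.log(2 * np.pi))

# "Train" on in-distribution data; pick a threshold at the 5th percentile,
# so roughly 95% of ID data passes.
x_id = rng.normal(0.0, 1.0, size=(2000, 4))
mu, cov = fit_gaussian(x_id)
threshold = np.percentile(log_likelihood(x_id, mu, cov), 5)

# Shifted samples fall below the threshold and are flagged as OOD.
x_ood = rng.normal(6.0, 1.0, size=(200, 4))
ood_rate = float(np.mean(log_likelihood(x_ood, mu, cov) < threshold))
print(f"flagged {ood_rate:.0%} of shifted samples as OOD")
```

The counterintuitive finding the paper targets is precisely that, for real deep generative models on images, this score can rank some OOD datasets *above* the training data, so the naive threshold fails.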

Provable Guarantees for Understanding Out-of-distribution Detection [article]

Peyman Morteza, Yixuan Li
2021-12-01, arXiv pre-print, arXiv:2112.00787v1 (https://arxiv.org/abs/2112.00787v1)
Out-of-distribution (OOD) detection is important for deploying machine learning models in the real world, where test data from shifted distributions can naturally arise. ... Lastly, we formally provide provable guarantees and a comprehensive analysis of our method, underpinning how various properties of the data distribution affect the performance of OOD detection. ... Out-of-distribution (OOD) samples can naturally arise from an irrelevant distribution whose label set has no intersection with the training categories, and therefore should not be predicted by the model. ...

Certifiably Adversarially Robust Detection of Out-of-Distribution Data [article]

Julian Bitterwolf, Alexander Meinke, Matthias Hein
2021-03-10, arXiv pre-print, arXiv:2007.08473v3 (https://arxiv.org/abs/2007.08473v3)
Deep neural networks are known to be overconfident when applied to out-of-distribution (OOD) inputs which clearly do not belong to any class. ... Moreover, in contrast to certified adversarial robustness, which typically comes with a significant loss in prediction performance, certified guarantees for worst-case OOD detection are possible without much ... Acknowledgements: The authors acknowledge support from the German Federal Ministry of Education and Research. ...

Out-of-Distribution Example Detection in Deep Neural Networks using Distance to Modelled Embedding [article]

Rickard Sjögren, Johan Trygg
2021-08-24, arXiv pre-print, arXiv:2108.10673v1 (https://arxiv.org/abs/2108.10673v1)
We present Distance to Modelled Embedding (DIME), which we use to detect out-of-distribution examples at prediction time. ... The adoption of deep learning in safety-critical systems raises the need for understanding what deep neural networks do not understand after models have been deployed. ... Kimin Lee at UC Berkeley for providing guidance on implementation details of Deep-Mahalanobis. ...
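
The "distance to a modelled embedding" idea can be sketched with a plain Mahalanobis model of the embedding space, in the spirit of the Deep-Mahalanobis work the snippet credits. This is a simplified stand-in rather than the paper's exact DIME procedure; the class name, synthetic embeddings, and percentile cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

class EmbeddingModel:
    """Gaussian model of a network's training-set embeddings."""

    def __init__(self, train_embeddings):
        self.mu = train_embeddings.mean(axis=0)
        cov = np.cov(train_embeddings, rowvar=False)
        self.inv_cov = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

    def distance(self, z):
        """Mahalanobis distance of each embedding row to the model."""
        diff = z - self.mu
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.inv_cov, diff))

# Embeddings of training data (synthetic here; in practice, penultimate-layer
# activations of the deployed network).
train_z = rng.normal(size=(5000, 8))
model = EmbeddingModel(train_z)

# Calibrate the cutoff on the training embeddings themselves.
cutoff = np.percentile(model.distance(train_z), 99)

test_id = rng.normal(size=(500, 8))
test_ood = rng.normal(5.0, 1.0, size=(500, 8))
id_flagged = float(np.mean(model.distance(test_id) > cutoff))
ood_flagged = float(np.mean(model.distance(test_ood) > cutoff))
print("ID flagged: ", id_flagged)
print("OOD flagged:", ood_flagged)
```

A single forward pass plus one matrix-vector product per example is all the detector costs at prediction time, which is what makes this family of methods attractive for deployed systems.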

A Simple Approach to Improve Single-Model Deep Uncertainty via Distance-Awareness [article]

Jeremiah Zhe Liu, Shreyas Padhy, Jie Ren, Zi Lin, Yeming Wen, Ghassen Jerfel, Zack Nado, Jasper Snoek, Dustin Tran, Balaji Lakshminarayanan
2022-05-01, arXiv pre-print, arXiv:2205.00403v1 (https://arxiv.org/abs/2205.00403v1)
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection. ... Accurate uncertainty quantification is a major challenge in deep learning, as neural networks can make overconfident errors and assign high-confidence predictions to out-of-distribution (OOD) inputs. ... The out-of-domain distribution p*(y|x, x ∈ X_OOD) can in general be very different from the in-domain distribution p*(y|x, x ∈ X_IND), and it is usually expected that the model will only generalize ...

Scarce Data Driven Deep Learning of Drones via Generalized Data Distribution Space [article]

Chen Li, Schyler C. Sun, Zhuangkun Wei, Antonios Tsourdos, Weisi Guo
2022-04-07, arXiv pre-print, arXiv:2108.08244v2 (https://arxiv.org/abs/2108.08244v2)
Due to the lack of diverse drone training data, accurate training of deep learning detection algorithms under scarce data is an open challenge. ... However, these methods cannot guarantee capturing diverse drone designs and fully understanding the deep feature space of drones. ... Related work: One of the universal challenges in deep neural network training is that when there is a lack of data, out-of-sample performance cannot be guaranteed [2]. ...

An Algorithm for Out-Of-Distribution Attack to Neural Network Encoder [article]

Liang Liang, Linhai Ma, Linchen Qian, Jiasong Chen
2021-01-27, arXiv pre-print, arXiv:2009.08016v4 (https://arxiv.org/abs/2009.08016v4)
... because of dimensionality reduction in the DNN models. ... Out-of-distribution (OOD) samples do not follow the distribution of the training set, and therefore the predicted class labels on OOD samples become meaningless. ... Input complexity and out-of-distribution detection with likelihood-based generative models. International Conference on Learning Representations 2020, 2019. ...

Generalized Out-of-Distribution Detection: A Survey [article]

Jingkang Yang, Kaiyang Zhou, Yixuan Li, Ziwei Liu
2021-10-21, arXiv pre-print, arXiv:2110.11334v1 (https://arxiv.org/abs/2110.11334v1)
Out-of-distribution (OOD) detection is critical to ensuring the reliability and safety of machine learning systems. ... In this survey, we first present a generic framework called generalized OOD detection, which encompasses the five aforementioned problems, i.e., AD, ND, OSR, OOD detection, and OD. ... Out-of-Distribution Detection Background: With the observation that deep learning models can overconfidently classify samples from different semantic distributions, the field of out-of-distribution detection ...
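
The overconfidence baseline that much of this surveyed literature starts from, maximum softmax probability (MSP), is easy to sketch: use the classifier's own top softmax probability as an in-distribution score. The logits below are synthetic stand-ins for real network outputs.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Higher score = more confidently in-distribution."""
    return softmax(logits).max(axis=-1)

peaked = np.array([8.0, 0.5, 0.2])  # confident prediction: ID-like
flat = np.array([1.0, 0.9, 1.1])    # near-uniform prediction: OOD-like
print("ID-like MSP: ", round(float(msp_score(peaked)), 3))
print("OOD-like MSP:", round(float(msp_score(flat)), 3))
```

The survey's point is that this baseline is unreliable precisely because deep models can emit peaked softmax outputs even on semantically foreign inputs, which motivates the AD/ND/OSR/OOD/OD taxonomy it builds.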

Towards neural networks that provably know when they don't know [article]

Alexander Meinke, Matthias Hein
2020-02-21, arXiv pre-print, arXiv:1909.12180v2 (https://arxiv.org/abs/1909.12180v2)
In the context of out-of-distribution (OOD) detection there have been a number of proposals to mitigate this problem, but none of them are able to make any mathematical guarantees. ... of an out-distribution point. ... A generalization guarantee for an out-of-distribution detection scheme is provided in Liu et al. (2018). ...

Uncertainty Estimation Using a Single Deep Deterministic Neural Network [article]

Joost van Amersfoort, Lewis Smith, Yee Whye Teh, Yarin Gal
2020-06-29, arXiv pre-print, arXiv:2003.02037v2 (https://arxiv.org/abs/2003.02037v2)
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass. ... By enforcing detectability of changes in the input using a gradient penalty, we are able to reliably detect out-of-distribution data. ... While generative models are a promising avenue for out-of-distribution detection, they are not able to assess predictive uncertainty: given that a data point is in distribution, can our discriminative ...

An Unsupervised Learning Approach for Early Damage Detection by Time Series Analysis and Deep Neural Network to Deal with Output-Only (Big) Data

Alireza Entezami, Hassan Sarmadi, Stefano Mariani
2020-11-14, Engineering Proceedings (MDPI AG), doi:10.3390/ecsa-7-08281 (https://doi.org/10.3390/ecsa-7-08281)
To enhance current damage detection procedures, in this work we propose an unsupervised learning method based on time series analysis, deep learning and the Mahalanobis distance metric for feature ... The main novelty of this strategy is simultaneously dealing with the significant issue of Big Data analytics for damage detection, and distinguishing damage states from the undamaged one in an unsupervised ... on the generalized extreme value distribution and the block maxima technique [10, 12]. ...

Memory Augmented Generative Adversarial Networks for Anomaly Detection [article]

Ziyi Yang, Teng Zhang, Iman Soltani Bozchalooi, Eric Darve
2020-02-07, arXiv pre-print, arXiv:2002.02669v1 (https://arxiv.org/abs/2002.02669v1)
Classical anomaly detection algorithms focus on learning to model and generate normal data, but guarantees for detecting anomalous data are typically weak. ... The proposed Memory Augmented Generative Adversarial Network (MEMGAN) interacts with a memory module for both the encoding and generation processes. ... Deep generative models are capable of learning the normal data distribution q(x), e.g., generative adversarial networks (GANs), proposed in Goodfellow et al. (2014). ...

Privacy-Preserving Case-Based Explanations: Enabling Visual Interpretability by Protecting Privacy

Helena Montenegro, Wilson Silva, Alex Gaudio, Matt Fredrikson, Asim Smailagic, Jaime S. Cardoso
2022, IEEE Access (IEEE), doi:10.1109/access.2022.3157589 (https://doi.org/10.1109/access.2022.3157589)
An intuitive way to improve the interpretability of deep learning models is to explain their decisions with similar cases. ... Finally, we identify and propose new lines of research to guide future work on the generation of privacy-preserving case-based explanations. ... Background on Deep Generative Models: In this section, we provide background on deep generative models, as they are used in several case-based interpretability and privacy-preserving methods. ...

Anomalous Example Detection in Deep Learning: A Survey [article]

Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song
2021-02-19, arXiv pre-print, arXiv:2003.06979v2 (https://arxiv.org/abs/2003.06979v2)
Deep learning (DL) is vulnerable to out-of-distribution and adversarial examples, resulting in incorrect outputs. ... We discuss various techniques in each of the categories and provide the relative strengths and weaknesses of the approaches. ... Acknowledgement: This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. ...

Deep Visual Waterline Detection within Inland Marine Environment [article]

Jing Huang, Hengfeng Miao, Lin Li, Yuanqiao Wen, Changshi Xiao
2019-11-24, arXiv pre-print, arXiv:1911.10498v1 (https://arxiv.org/abs/1911.10498v1)
This paper attempts to find a solution that guarantees the effectiveness of waterline detection for inland maritime applications with a general digital camera sensor. ... To this end, a general deep-learning-based paradigm applicable in variable inland waters, named DeepWL, is proposed, which simultaneously addresses the efficiency of waterline detection. ... In this paper, we propose a novel visual detection approach to identifying inland waterlines with a general digital camera using deep learning techniques, aiming to guarantee the effectiveness ...
Showing results 1-15 of 85,316.