40,660 Hits in 6.3 sec

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples [article]

Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin
2018 arXiv   pre-print
The first one forces the classifier to be less confident on out-of-distribution samples, and the second one (implicitly) generates the most effective training samples for the first one.  ...  The problem of detecting whether a test sample is from the in-distribution (i.e., the training distribution of the classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine  ...  CONFIDENT CLASSIFIER FOR OUT-OF-DISTRIBUTION: Without loss of generality, suppose that the cross-entropy loss is used for training.  ... 
arXiv:1711.09325v3 fatcat:xp5mtz3ou5cm5c562xbuzxqmle
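The confidence-loss part of this approach can be illustrated with a short sketch (the GAN that implicitly generates effective training samples is omitted; the weight `beta` and all names are illustrative, not the authors' code): cross-entropy on in-distribution data plus a KL-divergence-to-uniform penalty on out-of-distribution inputs.

```python
import math

import torch
import torch.nn.functional as F


def confidence_loss(model, x_in, y_in, x_out, beta=1.0):
    """Cross-entropy on in-distribution data plus KL(uniform || p) on OOD data."""
    ce = F.cross_entropy(model(x_in), y_in)

    log_p_out = F.log_softmax(model(x_out), dim=1)        # (B, K) log-probabilities
    num_classes = log_p_out.size(1)
    # KL(U || p) = -log K - (1/K) * sum_c log p_c, averaged over the OOD batch.
    kl_uniform = (-log_p_out.mean(dim=1) - math.log(num_classes)).mean()

    return ce + beta * kl_uniform
```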

Deep Anomaly Detection with Outlier Exposure [article]

Dan Hendrycks and Mantas Mazeika and Thomas Dietterich
2019 arXiv   pre-print
The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples.  ...  We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE).  ...  ACKNOWLEDGMENTS We thank NVIDIA for donating GPUs used in this research. This research was supported by a grant from the Future of Life Institute.  ... 
arXiv:1812.04606v3 fatcat:ysvcjchvvrchdgps7f63ve5yoi
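A minimal sketch of the Outlier Exposure objective for a softmax classifier, assuming an in-distribution loader and an auxiliary outlier loader that both yield (input, label) batches; `lambda_oe` and the other names are illustrative rather than the authors' released code:

```python
import torch
import torch.nn.functional as F


def train_epoch_with_oe(model, optimizer, in_loader, outlier_loader, lambda_oe=0.5):
    """One epoch of standard training plus a uniform-target term on auxiliary outliers."""
    for (x_in, y_in), (x_out, _) in zip(in_loader, outlier_loader):
        optimizer.zero_grad()

        loss = F.cross_entropy(model(x_in), y_in)

        # Cross-entropy to the uniform distribution on outlier inputs:
        # -(1/K) * sum_c log p_c(x), encouraging low-confidence predictions there.
        log_p = F.log_softmax(model(x_out), dim=1)
        loss = loss + lambda_oe * (-log_p.mean(dim=1)).mean()

        loss.backward()
        optimizer.step()
```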

Uncertainty Calibration for Deep Audio Classifiers [article]

Tong Ye, Shijing Si, Jianzong Wang, Ning Cheng, Jing Xiao
2022 arXiv   pre-print
In this work, we investigate the uncertainty calibration for deep audio classifiers.  ...  Results indicate that uncalibrated deep audio classifiers may be over-confident, and SNGP performs the best and is very efficient on the two datasets of this paper.  ...  For each sample in the in-distribution test set, and each out-of-distribution example, a confidence score is produced, which will be used to predict which distribution the samples come from.  ... 
arXiv:2206.13071v1 fatcat:uuvfzpkgvzfxfbol2lsyahcpmm
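The evaluation protocol mentioned in the snippet (a per-sample confidence score used to separate in-distribution from out-of-distribution test points) is commonly implemented with the maximum softmax probability and AUROC; a small sketch, with placeholder array names:

```python
import numpy as np
from sklearn.metrics import roc_auc_score


def max_softmax_confidence(logits):
    """Per-sample maximum softmax probability from an (N, K) array of logits."""
    z = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)


def ood_auroc(logits_id, logits_ood):
    """AUROC of 'confidence predicts in-distribution membership'."""
    scores = np.concatenate([max_softmax_confidence(logits_id),
                             max_softmax_confidence(logits_ood)])
    labels = np.concatenate([np.ones(len(logits_id)), np.zeros(len(logits_ood))])
    return roc_auc_score(labels, scores)
```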

Calibrating Deep Neural Network Classifiers on Out-of-Distribution Datasets [article]

Zhihui Shao, Jianyi Yang, Shaolei Ren
2020 arXiv   pre-print
Nonetheless, on an out-of-distribution (OOD) dataset in practice, the target DNN can often mis-classify samples with a high confidence, creating significant challenges for the existing calibration methods  ...  The key novelty of CCAC is an auxiliary class in the calibration model which separates mis-classified samples from correctly classified ones, thus effectively mitigating the target DNN's being confidently  ...  Misclassification detection metrics: While our main purpose is confidence calibration, a byproduct of our method is the better detection of mis-classified samples based on a threshold of the calibrated  ... 
arXiv:2006.08914v1 fatcat:4wkxrr74wzdgla3sizw46crypm
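One way to picture the auxiliary-class idea is a small calibration head on top of the frozen target DNN's logits with K+1 outputs, the extra class absorbing mis-classified and OOD samples. This is only a rough sketch under that assumption; the paper's CCAC model may differ in detail:

```python
import torch
import torch.nn as nn


class AuxClassCalibrator(nn.Module):
    """Calibration model over the target DNN's logits with one extra 'wrong/OOD' class."""

    def __init__(self, num_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes + 1),   # +1 auxiliary class
        )

    def forward(self, target_logits):
        # Calibrated probabilities over the K original classes plus the auxiliary class.
        return self.net(target_logits).softmax(dim=1)
```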

A Simple Framework for Robust Out-of-Distribution Detection

Youngbum Hur, Eunho Yang, Sung Ju Hwang
2022 IEEE Access  
Out-of-distribution (OOD) detection, i.e., identifying whether a given test sample is drawn from outside the training distribution, is essential for a deep classifier to be deployed in a real-world application  ...  Motivated by this, we propose a simple yet effective training scheme for further calibrating the softmax probability of a classifier to achieve high OOD detection performance under both hard and easy scenarios  ...  Jinwoo Shin and his Ph.D. student Jihoon Tack at the KAIST for their useful suggestions and valuable comments.  ... 
doi:10.1109/access.2022.3153723 fatcat:e6bodhr7fnblpejcyr42zoht2m

Toward Metrics for Differentiating Out-of-Distribution Sets [article]

Mahdieh Abbasi, Changjian Shui, Arezoo Rajabi, Christian Gagne, Rakesh Bobba
2020 arXiv   pre-print
Vanilla CNNs, as uncalibrated classifiers, suffer from classifying out-of-distribution (OOD) samples nearly as confidently as in-distribution samples.  ...  We also empirically show the effectiveness of a protective OOD set for training well-generalized confidence-calibrated vanilla CNNs.  ...  We thank Annette Schwerdtfeger for proofreading the paper.  ... 
arXiv:1910.08650v3 fatcat:zqkpiazpdzab7lsr5kewrbjt3u

On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration [article]

Kanil Patel, William Beluch, Dan Zhang, Michael Pfeiffer, Bin Yang
2021 arXiv   pre-print
of predicted uncertainty, or detection of out-of-distribution inputs.  ...  Variants of OMADA can employ different sampling schemes for ambiguous on-manifold examples based on the entropy of their estimated soft labels, which exhibit specific strengths for generalization, calibration  ...  classifier by injecting out-of-distribution samples into the training set.  ... 
arXiv:1912.07458v5 fatcat:dnebtyxpovc3jmlvc5h3esciey

SLOVA: Uncertainty Estimation Using Single Label One-Vs-All Classifier [article]

Bartosz Wójcik, Jacek Grela, Marek Śmieja, Krzysztof Misztal, Jacek Tabor
2022 arXiv   pre-print
Finally, our approach performs extremely well in the detection of out-of-distribution samples.  ...  Unlike the typical softmax function, SLOVA naturally detects out-of-distribution samples if the probabilities of all other classes are small.  ...  POIR.04.04.00-00-14DE/18-00) within the Team-Net program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund.  ... 
arXiv:2206.13923v1 fatcat:7xdbftivhbcvvnmda6uwlynve4
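The one-vs-all reading described above can be sketched as follows (not the exact SLOVA model; the threshold and names are illustrative): each class receives an independent sigmoid score, and a sample is treated as out-of-distribution when no class claims it.

```python
import torch


def ova_scores(logits):
    """Independent per-class probabilities from a one-vs-all head, shape (B, K)."""
    return torch.sigmoid(logits)


def predict_with_ood(logits, threshold=0.5):
    p = ova_scores(logits)
    max_p, pred = p.max(dim=1)
    is_ood = max_p < threshold        # flagged as OOD when every per-class score is small
    return pred, is_ood
```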

CODEs: Chamfer Out-of-Distribution Examples against Overconfidence Issue [article]

Keke Tang, Dingruibo Miao, Weilong Peng, Jianpeng Wu, Yawen Shi, Zhaoquan Gu, Zhihong Tian, Wenping Wang
2021 arXiv   pre-print
Overconfident predictions on out-of-distribution (OOD) samples are a thorny issue for deep neural networks.  ...  Besides, we demonstrate that CODEs are useful for improving OOD detection and classification.  ...  Besides, it has been validated that models for confidence calibration on the input distribution cannot be used for out-of-distribution data [29]. Data Augmentation.  ... 
arXiv:2108.06024v1 fatcat:k3mwenolhbhqbkzqqmcdgqgv5a

One Versus all for deep Neural Network Incertitude (OVNNI) quantification [article]

Gianni Franchi, Andrei Bursuc, Emanuel Aldea, Severine Dubuisson, Isabelle Bloch
2020 arXiv   pre-print
On the other hand, the two types of classifiers mutually reinforce their detection of out-of-distribution (OOD) samples, entirely circumventing the requirement of using such samples during training.  ...  On the one hand, the adjustment provided by the AVA DNN to the score of the base classifiers allows for a more fine-grained inter-class separation.  ...  in- from out-of-distribution samples.  ... 
arXiv:2006.00954v1 fatcat:u6s5nf57nzazradugbmfzwj6sa
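A rough sketch of combining one-vs-all (OVA) and all-vs-all (AVA) outputs in the spirit of this description; the exact OVNNI combination rule is an assumption here, and all names are illustrative:

```python
import torch


def ovnni_style_score(ova_logits, ava_logits):
    """Per-sample confidence from OVA scores adjusted by an AVA softmax."""
    ova = torch.sigmoid(ova_logits)           # (B, K) one-vs-all scores
    ava = torch.softmax(ava_logits, dim=1)    # (B, K) all-vs-all scores
    combined = ova * ava                      # AVA adjusts the per-class OVA score
    return combined.max(dim=1).values         # low values indicate OOD inputs
```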

Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis [article]

Christoph Berger, Magdalini Paschali, Ben Glocker, Konstantinos Kamnitsas
2021 arXiv   pre-print
For critical applications such as clinical decision making, it is important that a model can detect such out-of-distribution (OOD) inputs and express its uncertainty.  ...  In this work, we assess the capability of various state-of-the-art approaches for confidence-based OOD detection through a comparative study and in-depth analysis.  ...  Table 2: Performance of different methods for separation of out-of-distribution (OOD) from in-distribution (ID) samples for CheXpert in two settings.  ... 
arXiv:2107.02568v1 fatcat:26z53wikwjezvfkznxezke2g2i

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks [article]

Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin
2018 arXiv   pre-print
While most prior methods have been evaluated for detecting either out-of-distribution or adversarial samples, but not both, the proposed method achieves state-of-the-art performance in both cases  ...  Detecting test samples drawn sufficiently far away from the training distribution, statistically or adversarially, is a fundamental requirement for deploying a good classifier in many real-world machine  ...  For detecting out-of-distribution (OOD) samples, recent works have utilized the confidence from the posterior distribution [13, 21].  ... 
arXiv:1807.03888v2 fatcat:kkgl5zrfdfhztk6hajgpvmhr5q
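A minimal sketch of a Mahalanobis-style confidence score in the spirit of this framework: fit class-conditional Gaussians with a shared covariance on penultimate-layer features of the training set, then score a test feature by its distance to the closest class mean. Feature extraction is assumed to happen elsewhere; names are illustrative.

```python
import numpy as np


def fit_class_gaussians(features, labels, num_classes):
    """Class means and a shared ('tied') precision matrix from (N, D) training features."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features)
    precision = np.linalg.pinv(cov)
    return means, precision


def mahalanobis_confidence(f, means, precision):
    """Negative squared Mahalanobis distance to the closest class mean (higher = more ID)."""
    diffs = means - f                                       # (K, D)
    d2 = np.einsum('kd,de,ke->k', diffs, precision, diffs)  # per-class squared distances
    return -d2.min()
```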

CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [article]

Jihoon Tack, Sangwoo Mo, Jongheon Jeong, Jinwoo Shin
2020 arXiv   pre-print
Novelty detection, i.e., identifying whether a given sample is drawn from outside the training distribution, is essential for reliable machine learning.  ...  Specifically, in addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations  ...  Extension for training confidence-calibrated classifiers: Furthermore, we propose an extension of CSI for training confidence-calibrated classifiers [22, 37] from a given labeled dataset {(x_m, y_m)  ... 
arXiv:2007.08176v2 fatcat:x44k3vhib5hwdfrnnqwlzzxgae
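The "distributionally shifted augmentation" idea can be sketched by expanding a batch with rotated copies that are treated as separate instances (negatives of the original) when the contrastive loss is formed; only the batch construction is shown here, and any standard contrastive loss is assumed to be plugged in downstream.

```python
import torch


def expand_with_shifted_instances(x):
    """x: (B, C, H, W) image batch -> (4B, C, H, W) batch plus per-image shift labels."""
    # Rotations by 0/90/180/270 degrees act as the shifting transformations;
    # each rotated copy is treated as its own instance in the contrastive batch.
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    x_shifted = torch.cat(rotations, dim=0)
    shift_labels = torch.arange(4).repeat_interleave(x.size(0))
    return x_shifted, shift_labels
```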

Investigation of Uncertainty of Deep Learning-based Object Classification on Radar Spectra [article]

Kanil Patel, William Beluch, Kilian Rambach, Adriana-Eliza Cozma, Michael Pfeiffer, Bin Yang
2021 arXiv   pre-print
Our investigation shows that further research into training and calibrating DL networks is necessary and offers great potential for safe automotive object classification with radar sensors.  ...  We show that by applying state-of-the-art post-hoc uncertainty calibration, the quality of confidence measures can be significantly improved, thereby partially resolving the over-confidence problem.  ...  The distribution of the correctly classified samples is depicted in blue and that of the mis-classified samples in red.  ... 
arXiv:2106.05870v1 fatcat:wicivzdfhrarpf7a3okninems4
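One of the standard post-hoc calibration methods such studies compare is temperature scaling: a single scalar T is fitted on held-out validation logits by minimising the negative log-likelihood, leaving the classifier itself unchanged. A minimal sketch (the optimiser choice and names are illustrative; LBFGS is also commonly used):

```python
import torch
import torch.nn.functional as F


def fit_temperature(val_logits, val_labels, steps=200, lr=0.01):
    """Fit a single temperature T on validation logits by minimising the NLL."""
    log_t = torch.zeros(1, requires_grad=True)   # optimise log T so that T stays positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()


# Calibrated probabilities for new logits:
# probs = F.softmax(new_logits / T, dim=1)
```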

Long-Tailed Recognition Using Class-Balanced Experts [article]

Saurabh Sharma, Ning Yu, Mario Fritz, Bernt Schiele
2020 arXiv   pre-print
experts that combines the strengths of diverse classifiers.  ...  However, real-world datasets exhibit highly class-imbalanced distributions, yielding two main challenges: relative imbalance amongst the classes and data scarcity for medium-shot or few-shot classes.  ...  Out-of-distribution detection for experts: The expert models identify samples from classes outside their class-balanced subset as out-of-distribution (OOD for short); therefore, we train them using an out-of-distribution  ... 
arXiv:2004.03706v2 fatcat:u6tqhjxsabaobke7wlxda5pev4
Showing results 1 — 15 out of 40,660 results