Classifier Loss Under Metric Uncertainty
[chapter]
Lecture Notes in Computer Science
First, to identify model-selection metrics that lead to stronger cross-metric performance, we characterize the expected loss where the selection metric is held fixed and the evaluation metric is varied ...
Classifiers that are deployed in the field can be used and evaluated in ways that were not anticipated when the model was trained. ...
But we see a much tighter distribution at the vertex of the wedge for classifiers that do perform well under both metrics. ...
doi:10.1007/978-3-540-74958-5_30
fatcat:kawgd7h3tnal5b5yhcc3s7nxo4
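A minimal sketch of the cross-metric setup described above (hypothetical candidate models and metric set, not the paper's experimental protocol): hold the selection metric fixed, pick the best model under it, then score that model under the other evaluation metrics.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

candidates = {
    "logreg": LogisticRegression(max_iter=1000).fit(X_tr, y_tr),
    "tree": DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr),
}
metrics = {
    "accuracy": lambda m: accuracy_score(y_val, m.predict(X_val)),
    "f1": lambda m: f1_score(y_val, m.predict(X_val)),
    "auc": lambda m: roc_auc_score(y_val, m.predict_proba(X_val)[:, 1]),
}

for sel_name, sel_metric in metrics.items():
    # hold the selection metric fixed: pick the best candidate under it
    best = max(candidates, key=lambda name: sel_metric(candidates[name]))
    # vary the evaluation metric: see how the selected model scores elsewhere
    scores = {ev_name: round(ev(candidates[best]), 3) for ev_name, ev in metrics.items()}
    print(f"selected by {sel_name}: {best} -> {scores}")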
Measuring and Improving Model-Moderator Collaboration using Uncertainty Estimation
[article]
2021
arXiv
pre-print
Using these metrics, we conduct a large benchmark study evaluating the performance of state-of-the-art uncertainty models under different collaborative review strategies. ...
First, we introduce principled metrics to describe the performance of the collaborative system under capacity constraints on the human moderator, quantifying how efficiently the combined system utilizes ...
We find that the deterministic model with focal loss is over-confident for predictions under 0.5, and under-confident above 0.5, while the SNGP models are still over-confident, although to a lesser degree ...
arXiv:2107.04212v1
fatcat:q5dia34jgrf55e53nuomgqr6fa
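The over-/under-confidence finding above rests on a calibration analysis. A generic sketch of such a check, on synthetic scores rather than the paper's models, bins predictions by confidence and compares the average confidence in each bin with the observed positive rate:

import numpy as np

def reliability_table(probs, labels, n_bins=10):
    """probs: predicted P(y=1); labels: 0/1 ground truth."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            mean_conf = probs[mask].mean()   # average predicted probability in the bin
            emp_freq = labels[mask].mean()   # observed positive rate in the bin
            rows.append((lo, hi, mean_conf, emp_freq, mask.sum()))
    return rows

rng = np.random.default_rng(0)
probs = rng.uniform(size=5000)
labels = (rng.uniform(size=5000) < probs**1.5).astype(int)  # toy miscalibrated "model"
for lo, hi, conf, freq, n in reliability_table(probs, labels):
    verdict = "over-confident" if conf > freq else "under-confident"
    print(f"[{lo:.1f},{hi:.1f}) conf={conf:.2f} freq={freq:.2f} n={n} ({verdict})")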
Closeness and Uncertainty Aware Adversarial Examples Detection in Adversarial Machine Learning
[article]
2021
arXiv
pre-print
We first leverage moment-based predictive uncertainty estimates of a DNN classifier, obtained using Monte-Carlo Dropout Sampling. ...
In this work, we explore and assess the usage of different types of metrics for detecting adversarial samples. ...
In Section 3, we will introduce the notion of uncertainty together with its main types and discuss how we can quantify different uncertainty metrics for a DNN classifier. ...
arXiv:2012.06390v2
fatcat:4tjrqtxftbbilko2yrhvdt4u4u
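A minimal sketch of moment-based uncertainty via Monte-Carlo Dropout in the spirit of the entry above (the toy model and sample count are assumptions): keep dropout active at inference time and use the mean and variance of the sampled softmax outputs.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 3))

def mc_dropout_moments(model, x, T=50):
    model.train()  # keeps Dropout stochastic; BatchNorm layers would need extra care
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
    mean = samples.mean(dim=0)   # first moment: averaged class probabilities
    var = samples.var(dim=0)     # second moment: per-class predictive variance
    return mean, var

x = torch.randn(8, 20)
mean, var = mc_dropout_moments(model, x)
print(mean.argmax(dim=-1), var.sum(dim=-1))  # prediction and a scalar uncertainty score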
Out-of-Distribution Detection for Automotive Perception
[article]
2021
arXiv
pre-print
This combination improves the area under the precision recall curve (AUPR) metric compared to state-of-the-art methods. ...
The cosine similarity metric correctly detects it as D_out, whereas the softmax metric incorrectly classifies it as D_in. ...
arXiv:2011.01413v2
fatcat:r37oqrxvifekpircsutjv5jf7u
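An illustrative contrast between the two scores mentioned above, on stand-in features and prototypes rather than the paper's network: a sample is scored by its maximum softmax probability or by its maximum cosine similarity to per-class feature prototypes, and a low score flags D_out.

import numpy as np

rng = np.random.default_rng(0)
prototypes = rng.normal(size=(10, 128))           # one feature prototype per class
W, b = rng.normal(size=(128, 10)), np.zeros(10)   # stand-in classifier head

def softmax_score(feat):
    logits = feat @ W + b
    p = np.exp(logits - logits.max())
    return (p / p.sum()).max()                    # maximum softmax probability

def cosine_score(feat):
    sims = prototypes @ feat / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(feat))
    return sims.max()                             # maximum cosine similarity to any prototype

feat = rng.normal(size=128)                       # penultimate-layer feature of a test sample
print("max softmax:", softmax_score(feat), "max cosine:", cosine_score(feat))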
Uncertainty Flow Facilitates Zero-Shot Multi-Label Learning in Affective Facial Analysis
2018
Applied Sciences
More importantly, based on the derived weight uncertainty, three sets of prediction-related uncertainty indexes, i.e., soft-max uncertainty, pure uncertainty, and uncertainty plus, are proposed to produce ...
That is, the model uncertainty, which can be quantified in Uncertainty Flow, is distilled from a single-label learning task. ...
Hence, it is imperative to investigate each loss metric independently. ...
doi:10.3390/app8020300
fatcat:q7nuexgum5aljbtb75yw26bxbe
nn-dependability-kit: Engineering Neural Networks for Safety-Critical Autonomous Driving Systems
[article]
2019
arXiv
pre-print
In particular, the tool realizes recent scientific results including (a) novel dependability metrics for indicating sufficient elimination of uncertainties in the product life cycle, (b) formal reasoning ...
Table/figure excerpt (metric identifiers Sn4, Sn5, Sn6, Sn8, Sn9): performance loss metric, neuron k-activation coverage metric, interpret. precision metric, occlusion sensitivity metric, perturbation loss metric, static analysis / formal verification ...
from nndependability.metrics import PerturbationLoss
metric = PerturbationLoss.Perturbation_Loss_Metric()
...
metric.addInputs(net, image, label)
...
metric.printMetricQuantity("AVERAGE_LOSS")
or to ...
arXiv:1811.06746v2
fatcat:nyfqvxi3rvga5fkseawou6r6b4
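For orientation, a hand-rolled sketch of the idea behind such a perturbation loss metric, not the toolkit's implementation: measure how much the classification loss grows when small random perturbations are applied to the input.

import torch
import torch.nn.functional as F

def perturbation_loss(net, image, label, noise_std=0.05, n_trials=20):
    net.eval()
    with torch.no_grad():
        clean = F.cross_entropy(net(image), label)
        losses = []
        for _ in range(n_trials):
            noisy = image + noise_std * torch.randn_like(image)   # small random perturbation
            losses.append(F.cross_entropy(net(noisy), label))
    return (torch.stack(losses).mean() - clean).item()            # average loss increase

# toy usage with a stand-in network and input
net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.randn(1, 3, 32, 32)
label = torch.tensor([3])
print("average perturbation loss:", perturbation_loss(net, image, label))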
The MBPEP: a deep ensemble pruning algorithm providing high quality uncertainty prediction
2019
Applied intelligence (Boston)
In addition to these networks, unique loss functions are proposed, and these functions make the sub-learners available for standard gradient descent learning. ...
Several experiments including predicting uncertainties of classification and regression are conducted to analyze the performance of MBPEP. ...
The loss metric in Tab. 4 shows that our model achieves the minimum learning losses on most of the datasets. ...
doi:10.1007/s10489-019-01421-8
fatcat:jpbn23mki5h3ddugt5uqpizhsm
Exploring the Limits of Epistemic Uncertainty Quantification in Low-Shot Settings
[article]
2021
arXiv
pre-print
Gradient-based methods seem to poorly estimate epistemic uncertainty and are the most affected by training set size. ...
In this paper we evaluate seven uncertainty methods on Fashion MNIST and CIFAR10, as we sub-sample and produce varied training set sizes. ...
Direct Uncertainty Quantification (DUQ) This method [15] replaces the standard softmax classifier with a radial basis function (RBF) classifier, where the output layer learns a weight matrix and a centroid ...
arXiv:2111.09808v1
fatcat:7glbn6dnjvdndih26aezls2k6i
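A simplified sketch of an RBF-style output layer in the spirit of DUQ as summarized above (dimensions, initialization, and the shared length scale are assumptions): each class keeps a learned transformation and centroid, and the class score is an RBF kernel on the distance to that centroid.

import torch
import torch.nn as nn

class RBFOutput(nn.Module):
    def __init__(self, feat_dim, embed_dim, n_classes, sigma=0.1):
        super().__init__()
        # per-class weight matrices and centroids (both learned)
        self.W = nn.Parameter(torch.randn(n_classes, embed_dim, feat_dim) * 0.05)
        self.centroids = nn.Parameter(torch.randn(n_classes, embed_dim))
        self.sigma = sigma

    def forward(self, feats):                                    # feats: (batch, feat_dim)
        emb = torch.einsum("cef,bf->bce", self.W, feats)         # (batch, classes, embed)
        dist2 = ((emb - self.centroids) ** 2).mean(dim=-1)       # squared distance per class
        return torch.exp(-dist2 / (2 * self.sigma ** 2))         # kernel value = class score

layer = RBFOutput(feat_dim=64, embed_dim=32, n_classes=10)
scores = layer(torch.randn(4, 64))
uncertainty = 1.0 - scores.max(dim=-1).values   # low maximum kernel value = high uncertainty
print(scores.shape, uncertainty)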
Identifying Incorrect Classifications with Balanced Uncertainty
[article]
2021
arXiv
pre-print
(BTCP) framework, which learns an uncertainty estimator with a novel Distributional Focal Loss (DFL) objective. ...
Uncertainty estimation is critical for cost-sensitive deep-learning applications (e.g., disease diagnosis). ...
# classifier
for epoch = 1, 2, ... do
    Predict the output softmax by classifier;
    Compute the CrossEntropy loss;
    Update the parameters of classifier and encoder;
end
# uncertainty estimator
for epoch ...
arXiv:2110.08030v1
fatcat:n2mpdzzczrenzhsuqxkdmomh7a
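A concrete, generic PyTorch version of the classifier stage in the pseudocode above (the encoder and classifier shapes are placeholders; the uncertainty-estimator stage is omitted):

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
classifier = nn.Linear(128, 10)
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()   # takes raw logits and applies softmax internally

x, y = torch.randn(256, 784), torch.randint(0, 10, (256,))   # stand-in batch
for epoch in range(1, 11):
    logits = classifier(encoder(x))      # "predict the output softmax by classifier"
    loss = criterion(logits, y)          # "compute the CrossEntropy loss"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                     # "update the parameters of classifier and encoder"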
Uncertainty Quantification in Segmenting Tuberculosis-Consistent Findings in Frontal Chest X-rays
2022
Biomedicines
We determine an optimal uncertainty threshold based on several uncertainty-related metrics. This threshold is used to select and refer highly uncertain cases to an expert. ...
In this study, we aim to investigate the benefits of (i) selecting an appropriate loss function and (ii) quantifying uncertainty in predictions using a VGG16-based U-Net model with the Monte-Carlo (MCD ...
Rather, it is to validate the use of appropriate loss functions suiting the data under study and quantify uncertainty in model representations. ...
doi:10.3390/biomedicines10061323
fatcat:tv3xewlrsrgbdeib2q6mguhoxq
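A sketch of the referral mechanism described above, with simulated per-case uncertainty scores standing in for the model's outputs: choose a threshold and refer the cases above it to an expert.

import numpy as np

rng = np.random.default_rng(0)
case_uncertainty = rng.gamma(shape=2.0, scale=0.05, size=200)   # e.g. mean predictive variance per scan

threshold = np.quantile(case_uncertainty, 0.9)   # one simple choice: refer the top 10% most uncertain cases
referred = case_uncertainty > threshold
print(f"threshold={threshold:.3f}, referred {referred.sum()} of {len(referred)} cases")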
Parameter Decoupling Strategy for Semi-supervised 3D Left Atrium Segmentation
[article]
2021
arXiv
pre-print
Based on this, the feature extractor is constrained to encourage the consistency of probability maps generated by classifiers under diversified features. ...
In the overall training process, the parameters of feature extractor and classifiers are updated alternately by consistency regularization operation and decoupling operation to gradually improve the generalization ...
At the same time, under the influence of decoupling loss ℒ, the classifiers become orthogonal. ...
arXiv:2109.09596v2
fatcat:mwbm5bpaqjeg7lkswbwzvttiwi
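One simple way to realize a decoupling penalty that pushes two classifier heads toward orthogonality, shown as an illustration of the idea rather than the paper's exact loss:

import torch
import torch.nn as nn
import torch.nn.functional as F

cls_a, cls_b = nn.Linear(64, 2), nn.Linear(64, 2)

def decoupling_loss(a: nn.Linear, b: nn.Linear) -> torch.Tensor:
    wa, wb = a.weight.flatten(), b.weight.flatten()
    return F.cosine_similarity(wa, wb, dim=0) ** 2   # reaches 0 when the two heads are orthogonal

print(decoupling_loss(cls_a, cls_b))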
Risk metrics of loss function for uncertain system
2012
Fuzzy Optimization and Decision Making
Real-life decisions are usually made in a state of uncertainty or risk. In this article we present two types of risk metrics of the loss function for uncertain systems. ...
Firstly, the concept of value at risk (VaR) of the loss function is introduced based on uncertainty theory, and its fundamental properties are examined. ...
Conclusions: In this paper, we mainly introduced two types of risk metrics of the loss function, VaR and TVaR, which can be used as quantitative risk techniques under uncertainty. ...
doi:10.1007/s10700-012-9146-5
fatcat:fhtvqrbewbeorbjo3eeihrpv7u
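For reference, an empirical sketch of the two risk metrics in the familiar probabilistic setting; the paper itself defines VaR and TVaR within Liu's uncertainty theory rather than from sampled losses.

import numpy as np

rng = np.random.default_rng(0)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # simulated loss samples

alpha = 0.95
var = np.quantile(losses, alpha)        # Value at Risk: the alpha-quantile of the loss
tvar = losses[losses >= var].mean()     # Tail Value at Risk: mean loss beyond VaR
print(f"VaR_{alpha}={var:.3f}, TVaR_{alpha}={tvar:.3f}")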
Online Continual Learning via Multiple Deep Metric Learning and Uncertainty-guided Episodic Memory Replay – 3rd Place Solution for ICCV 2021 Workshop SSLAD Track 3A Continual Object Classification
[article]
2021
arXiv
pre-print
We also store some samples, under the guidance of an uncertainty metric, for rehearsal, and perform online and periodical memory updates. ...
Moreover, we exploit a modified class-balanced focal loss for sensitive penalization of class-imbalanced and hard/easy samples. ...
• Online and periodic sampling strategy under the guidance of an uncertainty measure for the replay buffer.
• Classification task learning with soft-label retrospection and a modified class-balanced focal loss for ...
arXiv:2111.02757v1
fatcat:73vcgmg23ngwbkzkado5rmlv7e
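A common formulation of a class-balanced focal loss, shown only as an illustration; the authors use a modified variant whose details may differ, and the Cui et al.-style effective-number weights here are an assumption.

import torch
import torch.nn.functional as F

def class_balanced_focal_loss(logits, targets, samples_per_class, beta=0.999, gamma=2.0):
    effective_num = 1.0 - beta ** samples_per_class.float()
    weights = (1.0 - beta) / effective_num            # rarer classes get larger weights
    weights = weights / weights.sum() * len(weights)  # normalize to mean weight 1
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                               # probability assigned to the true class
    focal = (1.0 - pt) ** gamma * ce                  # focal term down-weights easy examples
    return (weights[targets] * focal).mean()

logits = torch.randn(16, 5)
targets = torch.randint(0, 5, (16,))
counts = torch.tensor([1000, 500, 100, 50, 10])       # per-class sample counts
print(class_balanced_focal_loss(logits, targets, counts))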
Striking the Right Balance with Uncertainty
[article]
2019
arXiv
pre-print
Our proposed approach efficiently utilizes sample and class uncertainty information to learn robust features and more generalizable classifiers. ...
We systematically study the class imbalance problem and derive a novel loss formulation for max-margin learning based on a Bayesian uncertainty measure. ...
As illustrated in Fig. 3 , under-represented classes in the training set lead to higher uncertainty and bigger confidence intervals. ...
arXiv:1901.07590v3
fatcat:6znury3kg5hkppg3q7lx3vjqgy
Uncertainty-Aware Credit Card Fraud Detection Using Deep Learning
[article]
2021
arXiv
pre-print
Moreover, to evaluate the predictive uncertainty estimates, a UQ confusion matrix and several performance metrics are utilized. ...
Through experimental results, we show that the ensemble is more effective in capturing uncertainty corresponding to generated predictions. ...
losses [26]. ...
arXiv:2107.13508v1
fatcat:uxgsee3h7vhidgyqteaiscrala
Showing results 1 — 15 out of 74,597 results