An Improved Boundary Uncertainty-Based Estimation for Classifier Evaluation

by David Ha, Shigeru Katagiri, Hideyuki Watanabe, Miho Ohsaki

Published in Journal of Signal Processing Systems by Springer Science and Business Media LLC, 2021.

Abstract

This paper proposes a new boundary uncertainty-based estimation method that has significantly higher accuracy, scalability, and applicability than our previously proposed boundary uncertainty estimation method. In our previous work, we introduced a new classifier evaluation metric that we termed "boundary uncertainty." The name "boundary uncertainty" comes from evaluating the classifier based solely on measuring the equality between class posterior probabilities along the classifier boundary; satisfaction of such equality can be described as "uncertainty" along the classifier boundary. We also introduced a method to estimate this new evaluation metric. By focusing solely on the classifier boundary to evaluate its uncertainty, boundary uncertainty defines an easier estimation target that can be accurately estimated directly from a finite training set without using a validation set. Regardless of the dataset, boundary uncertainty lies between 0 and 1, where a value of 1 indicates that the posterior probability estimation required to achieve the Bayes error has been attained. We call our previous boundary uncertainty estimation method "Proposal 1" to contrast it with the new method introduced in this paper, which we call "Proposal 2." Using Proposal 1, we performed successful classifier evaluation on real-world data and supported it with theoretical analysis. However, Proposal 1 suffered from accuracy, scalability, and applicability limitations owing to the difficulty of locating a classifier boundary in a multidimensional sample space. The novelty of Proposal 2 is that it locally reformalizes boundary uncertainty along a single dimension that focuses on the classifier boundary. This convenient reduction toward the classifier boundary provides the new method's significant improvements. In classifier evaluation experiments on Support Vector Machines (SVMs) and Multilayer Perceptrons (MLPs), we demonstrate that Proposal 2 offers competitive classifier evaluation accuracy compared to a benchmark Cross-Validation (CV) method, as well as much higher scalability than both CV and Proposal 1.
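The abstract describes the idea only at a high level: boundary uncertainty scores a classifier by how close the class posterior probabilities are to equality along its decision boundary, on a 0-to-1 scale. The sketch below is a rough, assumption-laden illustration of that idea in Python, not the authors' Proposal 1 or Proposal 2: the Platt-scaled posteriors, the quantile-based "near the boundary" band, and the 1 − 2|p − 0.5| score are all stand-ins chosen for illustration.

```python
# Conceptual sketch only (NOT the paper's estimator): approximate a
# boundary-uncertainty-style score for a binary SVM by checking how
# close the estimated class posteriors are to 0.5 for samples lying
# near the decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Toy binary classification problem.
X, y = make_classification(n_samples=2000, n_features=5, random_state=0)

# SVM with Platt-scaled probability estimates (an assumption; the paper
# does not rely on Platt scaling).
clf = SVC(probability=True, random_state=0).fit(X, y)

# Treat the 10% of samples with the smallest |decision function| values
# as a stand-in for "points on the classifier boundary".
margin = np.abs(clf.decision_function(X))
near_boundary = margin <= np.quantile(margin, 0.10)

# Score each near-boundary sample by how close P(class 1 | x) is to 0.5:
# 1.0 when the estimated posteriors are exactly equal, 0.0 when one
# class fully dominates. Averaging gives a number in [0, 1].
p1 = clf.predict_proba(X[near_boundary])[:, 1]
score = float(np.mean(1.0 - 2.0 * np.abs(p1 - 0.5)))
print(f"approximate boundary uncertainty: {score:.3f}")
```

Under this sketch, a score near 1 means the model's estimated posteriors stay close to 0.5 wherever its decision function crosses zero, which mirrors the equality-of-posteriors criterion the abstract describes.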

Archived Files and Locations

application/pdf   3.5 MB
file_ditcgw3zevhzphquaruwd4xuuq
link.springer.com (publisher)
web.archive.org (webarchive)
Type  article-journal
Stage   published
Date   2021-06-10
Language   en
Journal Metadata
Not in DOAJ
In Keepers Registry
ISSN-L:  1939-8115
Work Entity
Access all versions, variants, and formats of this work (e.g., pre-prints).
Catalog Record
Revision: 825efae1-1296-4d87-9d44-42e86db8d955