215,683 Hits in 5.7 sec

A New Data Selection Principle for Semi-Supervised Incremental Learning

Rong Zhang, A.I. Rudnicky
2006 18th International Conference on Pattern Recognition (ICPR'06)  
This is because the confidence score is primarily a metric to measure the classification correctness on a particular example, rather than one to measure the example's contribution to the training of an  ...  To address this problem, we propose a performance-driven principle for unlabeled data selection in which only the unlabeled examples that help to improve classification accuracy are selected for semi-supervised  ...  We use a metric that combines two aspects of information. Pseudo-accuracy considers the classification accuracy of model λ on the labeled and unlabeled sets.  ... 
doi:10.1109/icpr.2006.115 dblp:conf/icpr/ZhangR06 fatcat:gxuf6l4hwzeuliestswz5m3aqy
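The performance-driven selection principle described in this abstract — keep an unlabeled example only if adding it improves accuracy — can be sketched generically. This is a minimal illustration, not the authors' algorithm: the `fit_and_score` callback and the greedy accept/reject loop are assumptions.

```python
def select_unlabeled(candidates, labeled, fit_and_score):
    """Performance-driven selection of unlabeled data (sketch).

    Tentatively adds each pseudo-labeled candidate to the training set and
    keeps it only if held-out accuracy improves. fit_and_score(training_set)
    -> accuracy is a caller-supplied (hypothetical) retrain-and-evaluate
    callback.
    """
    base = fit_and_score(labeled)
    kept = []
    for ex in candidates:
        trial = labeled + kept + [ex]
        score = fit_and_score(trial)
        if score > base:          # accept only examples that help accuracy
            kept.append(ex)
            base = score
    return kept
```

In practice `fit_and_score` would retrain the classifier and evaluate on a held-out labeled set; any cheap proxy works for the sketch.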

Making better use of accuracy data in land change studies: Estimating accuracy and area and quantifying uncertainty using stratified estimation

Pontus Olofsson, Giles M. Foody, Stephen V. Stehman, Curtis E. Woodcock
2013 Remote Sensing of Environment  
A simple analysis of uncertainty based on the confidence bounds for land change area is applied to a carbon flux model to illustrate numerically that variability in the land change area estimate can have  ...  Accuracy assessments published for land change studies should report the information required to produce the stratified estimator of change area and to construct confidence intervals.  ...  Acknowledgment This research was funded by USGS Award Support for SilvaCarbon and NASA award NNX11AJ79G to Boston University, and USGS Cooperative Agreement G12AC20221 to State University of New York.  ... 
doi:10.1016/j.rse.2012.10.031 fatcat:mxbvz5yeefazfihgiqfvm7lrvq
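The stratified estimator of change area and its confidence interval that this abstract asks authors to report can be sketched in a few lines. The form below assumes simple random sampling within map strata and a normal-approximation interval; variable names are illustrative.

```python
def stratified_area_estimate(weights, counts, ref_class, z=1.96):
    """Stratified estimator of the area proportion of one reference class.

    weights   -- W_h: fraction of total map area in each map stratum h
    counts    -- counts[h][k]: number of sample units in stratum h whose
                 reference label is class k
    ref_class -- index k of the reference class whose area is estimated
    Returns (p_hat, (lower, upper)) with an approximate 95% CI (z = 1.96).
    """
    p_hat, var = 0.0, 0.0
    for W_h, row in zip(weights, counts):
        n_h = sum(row)
        p_hk = row[ref_class] / n_h          # within-stratum proportion
        p_hat += W_h * p_hk                  # area-weighted estimate
        var += W_h ** 2 * p_hk * (1.0 - p_hk) / (n_h - 1)
    se = var ** 0.5
    return p_hat, (p_hat - z * se, p_hat + z * se)
```

Multiplying `p_hat` and the interval bounds by the total map area converts the proportion into an area estimate with its uncertainty.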

Using multiple measures to predict confidence in instance classification

Kristine Monteith, Tony Martinez
2010 The 2010 International Joint Conference on Neural Networks (IJCNN)  
These aggregate measures result in higher classification accuracy than using a collection of single confidence estimates.  ...  To address this issue, we present the strategy of Aggregate Confidence Ensembles, which uses multiple measures to estimate a classifier's confidence in its predictions on an instance-by-instance basis.  ...  For example, when compared to the standard voting strategy, Aggregate Confidence Ensembles achieved a higher classification accuracy on twenty-four of the data sets, a lower classification accuracy on  ... 
doi:10.1109/ijcnn.2010.5596550 dblp:conf/ijcnn/MonteithM10 fatcat:6ozibdxktfacxh4xwhaohnvexe
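Combining multiple confidence measures per instance, as this abstract describes, can be sketched with a few common measures derived from a class-probability vector. The specific measures and the plain averaging below are assumptions; the paper learns the combination.

```python
import math

def confidence_measures(posterior):
    """Three per-instance confidence measures from one probability vector
    (assumes at least two classes): max posterior, top-two margin, and
    normalized (1 - entropy)."""
    s = sorted(posterior, reverse=True)
    max_post = s[0]
    margin = s[0] - s[1]
    ent = -sum(p * math.log(p) for p in posterior if p > 0)
    ent_conf = 1.0 - ent / math.log(len(posterior))
    return [max_post, margin, ent_conf]

def aggregate_confidence(posterior):
    """Aggregate the measures into one score by simple averaging (a
    stand-in for the learned combination in the paper)."""
    m = confidence_measures(posterior)
    return sum(m) / len(m)
```

The aggregate can then gate ensemble votes or flag low-confidence instances for review.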


K. S. Cheng, J. Y. Ling, T. W. Lin, Y. T. Liu, Y. C. Shen, Y. Kono
2019 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
In this study we present new concepts of LULC classification accuracies, namely the training-sample-based global accuracy and the classifier global accuracy, and a general expression of different measures  ...  We then propose a bootstrap-simulation approach for establishing 95% confidence intervals of classifier global accuracies.  ...  ACKNOWLEDGEMENTS We acknowledge the financial support from the Ministry of Science and Technology of Taiwan through a project grant (MOST-104-2918-I-002-013).  ... 
doi:10.5194/isprs-archives-xlii-2-w13-1207-2019 fatcat:4vfz7c6hdra4pg3q5dzvmhmcfi
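A percentile-bootstrap confidence interval for classification accuracy, in the spirit of the bootstrap-simulation approach this abstract proposes, is straightforward to sketch (the percentile method and parameter defaults are assumptions, not the authors' exact procedure):

```python
import random

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for classification accuracy.

    Resamples the evaluation set with replacement, recomputes accuracy on
    each replicate, and returns the (alpha/2, 1 - alpha/2) percentiles of
    the replicate accuracies.
    """
    rng = random.Random(seed)
    n = len(y_true)
    accs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        accs.append(sum(y_true[i] == y_pred[i] for i in idx) / n)
    accs.sort()
    lo = accs[int((alpha / 2) * n_boot)]
    hi = accs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

With 90/100 correct predictions the interval typically lands around (0.84, 0.96), reflecting the binomial spread of the point accuracy 0.9.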

Classification on Data with Biased Class Distribution [chapter]

Slobodan Vucetic, Zoran Obradovic
2001 Lecture Notes in Computer Science  
Then, we propose two methods to improve classification accuracy on new data.  ...  Given an unlabeled new data set we propose a bootstrap method to estimate its class probabilities by using an estimate of the classifier's accuracy on training data and an estimate of probabilities of  ...  Improving Classification Based on Class Probability Estimates Once class probabilities on S new are estimated it should be possible to improve the initial classifier.  ... 
doi:10.1007/3-540-44795-4_45 fatcat:sjyk3zlxxbccxhqmwazn7cvyii
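Estimating class probabilities on biased new data from the classifier's known accuracy, as this abstract outlines, has a simple closed-form binary special case: invert the relationship between the observed positive-prediction rate and the classifier's TPR/FPR. This is a related stand-in, not the authors' bootstrap method.

```python
def estimate_positive_prior(pred_labels, tpr, fpr):
    """Estimate the positive-class prior p on an unlabeled set.

    Solves q = p*TPR + (1 - p)*FPR for p, where q is the observed
    fraction of positive predictions; the result is clipped to [0, 1].
    TPR/FPR are assumed known from the labeled training data.
    """
    q = sum(pred_labels) / len(pred_labels)
    p = (q - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, p))
```

Once the new prior is estimated, the classifier's posteriors can be reweighted accordingly to improve classification on the shifted data.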

Multiple confidence estimates as indices of eyewitness memory

James D. Sauer, Neil Brewer, Nathan Weber
2008 Journal of experimental psychology. General  
Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence  ...  An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated.  ...  For example, if two lineup members (one of whom was the suspect whereas the other was known to be innocent) received confidence ratings of 90%, whereas all other lineup members received confidence estimates  ... 
doi:10.1037/a0012712 pmid:18729714 fatcat:xv5m3s5m2rfnjj3me5dksfre7q

Anticipating Students' Failure As Soon As Possible [chapter]

Cláudia Antunes
2010 Handbook of Educational Data Mining  
Experimental results show that the accuracy of these new methods is very promising, when compared with classifiers trained with smaller datasets.  ...  Despite the increase of interest in education and the quantity of existing data about students' behaviors, neither of them, per se, is enough to predict when some student will fail.  ...  Figure 6: Accuracy, sensitivity and specificity for different levels of support and confidence (for a minimum CAR accuracy of 50%). Figure 7: Impact of support and confidence on the estimation of  ... 
doi:10.1201/b10274-28 fatcat:cn5k44rvtrdkbaekpnco7xnqxm

Towards Consistent Predictive Confidence through Fitted Ensembles [article]

Navid Kardan, Ankit Sharma, Kenneth O. Stanley
2021 arXiv   pre-print
Furthermore, we present a new strong baseline for more consistent predictive confidence in deep models, called fitted ensembles, where overconfident predictions are rectified by transformed versions of  ...  Experiments on MNIST, SVHN, CIFAR-10/100, and ImageNet show that fitted ensembles significantly outperform conventional ensembles on OOD examples and can scale.  ...  Table I: Classification accuracy of ensembles of five CNNs and fitted ensembles on various data sets.  ... 
arXiv:2106.12070v1 fatcat:cfo26nl62nhanimilp2oryg5iq

Automatic Webpage Classification Enhanced by Unlabeled Data [chapter]

Seong-Bae Park, Byoung-Tak Zhang
2003 Lecture Notes in Computer Science  
By taking advantage of unlabeled data, the effective number of labeled data needed is significantly reduced and the classification accuracy is increased.  ...  The proposed method is based on a sequential learning of the classifiers which are trained on a small number of labeled data and then augmented by a large number of unlabeled data.  ...  This research was supported by the Korean Ministry of Education under the BK21-IT Program, by BrainTech programs sponsored by the Korean Ministry of Science and Technology.  ... 
doi:10.1007/978-3-540-45080-1_113 fatcat:x7lvkqpda5aq3auoygjkrujhxq

Multiclass microarray data classification based on confidence evaluation

H.L. Yu, S. Gao, B. Qin, J. Zhao
2012 Genetics and Molecular Research  
approach was tested on seven benchmark multiclass microarray datasets, with encouraging results, demonstrating effectiveness and feasibility.  ...  samples based on different classification confidence.  ...  Figure 3: Schema of the classification confidence evaluation based on the one-versus-rest support vector machine (OVR-SVM) strategy.  ... 
doi:10.4238/2012.may.15.6 pmid:22653582 fatcat:rypqu6xbjrfevnpopugx3lj47e

Beyond Cross-Validation—Accuracy Estimation for Incremental and Active Learning Models

Christian Limberg, Heiko Wersing, Helge Ritter
2020 Machine Learning and Knowledge Extraction  
By calculating classification confidences for unseen samples, it is possible to train an offline regression model, capable of predicting the classifier's accuracy on novel data in a semi-supervised fashion  ...  We introduce the Configram Estimation (CGEM) approach to predict the accuracy of any classifier that delivers confidences.  ...  It requires a classification of each new sample before using it for training. The classifier's accuracy is then estimated by averaging over a window of past classifications.  ... 
doi:10.3390/make2030018 fatcat:4hepw5mbtzc47nfwh4rzais7ge
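The "configram" feature underlying CGEM — a summary of the classifier's confidences on unseen samples, fed to an offline regressor that predicts accuracy — can be sketched as a normalized confidence histogram. This is a simplified rendering of the paper's idea; bin count and normalization are assumptions.

```python
def confidence_histogram(confidences, bins=10):
    """Configram-style feature (sketch): histogram of classification
    confidences in [0, 1], normalized to sum to 1. A regression model
    trained on (histogram, measured accuracy) pairs can then predict
    accuracy on novel data from its confidence histogram alone."""
    hist = [0] * bins
    for c in confidences:
        hist[min(int(c * bins), bins - 1)] += 1
    n = len(confidences)
    return [h / n for h in hist]
```

Any off-the-shelf regressor (e.g. a random forest) would consume these histogram vectors as features in the semi-supervised accuracy-prediction setup the abstract describes.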

Dynamic classifier ensemble using classification confidence

Leijun Li, Bo Zou, Qinghua Hu, Xiangqian Wu, Daren Yu
2013 Neurocomputing  
It dynamically selects a subset of classifiers for test samples according to classification confidence.  ...  The weights of base classifiers are learned by optimization of margin distribution on the training set, and the ordered aggregation technique is exploited to estimate the size of an appropriate subset.  ...  Acknowledgments This work is supported by National Natural Science Foundation of China under Grant 61222210, 61170107, 60873140, 61073125 and 61071179, the Program for New Century Excellent Talents in  ... 
doi:10.1016/j.neucom.2012.07.026 fatcat:6ygpuinbpvbbpo26wvuvphky2u
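Dynamic selection of a classifier subset per test sample by classification confidence, as this abstract describes, can be sketched as follows. Ranking by max posterior and the weighted-sum combination are assumptions; the paper learns the weights by margin-distribution optimization and sizes the subset by ordered aggregation.

```python
def dynamic_ensemble_predict(posteriors, weights, k):
    """Predict one test sample with a dynamically selected sub-ensemble.

    posteriors -- per-classifier class-probability vectors for THIS sample
    weights    -- learned base-classifier weights
    k          -- size of the selected subset
    Selects the k classifiers most confident on this sample (highest max
    posterior) and combines their weighted probabilities.
    """
    ranked = sorted(range(len(posteriors)),
                    key=lambda i: max(posteriors[i]), reverse=True)
    chosen = ranked[:k]
    n_classes = len(posteriors[0])
    combined = [sum(weights[i] * posteriors[i][c] for i in chosen)
                for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: combined[c])
```

Because the subset is re-chosen per sample, an unconfident base classifier is excluded exactly where it is unreliable, rather than globally.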

AC-Stream: Associative classification over data streams using multiple class association rules

Bordin Saengthongloun, Thanapat Kangkachit, Thanawin Rakthanmanon, Kitsana Waiyamai
2013 The 2013 10th International Joint Conference on Computer Science and Software Engineering (JCSSE)  
Data stream classification is one of the most interesting problems in the data mining community. Recently, the idea of associative classification was introduced to handle data streams.  ...  Compared to AC-DS and other traditional associative classifiers on a large number of UCI datasets, AC-Stream is more effective in terms of average accuracy and F1 measure.  ...  In [5], AC-DS is proposed as a new associative classification algorithm for data streams. AC-DS is based on the estimation of support threshold and a landmark window model.  ... 
doi:10.1109/jcsse.2013.6567349 fatcat:tquuddwd4rbbpc5jij37gubnci

A New Classification Approach Based on Multiple Classification Rules

Zhongmei Zhou
2014 Mathematical Problems in Engineering  
It is difficult to select a high quality rule set for classification. Second, the accuracy of associative classification depends on the setting of the minimum support and the minimum confidence.  ...  In this paper, we put forward a new classification approach called CMR (classification based on multiple classification rules).  ... 
doi:10.1155/2014/818253 fatcat:ci4m6t37bjf4pldxni5c2q5ojq
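The minimum-support and minimum-confidence thresholds this abstract refers to apply to class-association rules; computing both quantities for a candidate rule is simple. Variable names below are illustrative.

```python
def support_confidence(transactions, antecedent, label):
    """Support and confidence of the class-association rule
    antecedent -> label over (itemset, class) records.

    support    = fraction of all records matching antecedent AND label
    confidence = fraction of antecedent-covered records with that label
    """
    n = len(transactions)
    covers = [t for t in transactions if antecedent <= t[0]]
    matches = [t for t in covers if t[1] == label]
    support = len(matches) / n
    confidence = len(matches) / len(covers) if covers else 0.0
    return support, confidence
```

An associative classifier keeps only rules whose support and confidence exceed the chosen minimums, which is why those two thresholds drive its accuracy.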

Learning a Stopping Criterion for Active Learning for Word Sense Disambiguation and Text Classification

Jingbo Zhu, Huizhen Wang, Eduard H. Hovy
2008 International Joint Conference on Natural Language Processing  
We propose a new statistical learning approach, called minimum expected error strategy, to defining a stopping criterion through estimation of the classifier's expected error on future unlabeled examples  ...  costs in word sense disambiguation with degradation of 0.5% average accuracy, and approximately 90% costs in text classification with degradation of 2% average accuracy.  ... 
dblp:conf/ijcnlp/ZhuWH08 fatcat:2hy5crohhncwpnbizxszdflthy
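A stopping criterion based on the classifier's expected error over the remaining unlabeled pool, as this abstract describes, can be sketched by averaging the probability that the top-ranked label is wrong. The plug-in estimate and the fixed threshold are assumptions standing in for the paper's exact strategy.

```python
def expected_error(posteriors):
    """Plug-in estimate of expected error on unlabeled examples: the
    average probability that the most likely label is wrong."""
    return sum(1.0 - max(p) for p in posteriors) / len(posteriors)

def should_stop(posteriors, threshold=0.05):
    """Stop active learning once the estimated expected error on the
    remaining unlabeled pool drops below the threshold."""
    return expected_error(posteriors) < threshold
```

Checked after each labeling round, this halts annotation once further labels are unlikely to change many predictions, which is how the reported cost savings arise.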
Showing results 1–15 out of 215,683 results