128 Hits in 8.8 sec

Multi-class SVM optimization using MCE training with application to topic identification

Timothy J. Hazen
2010 2010 IEEE International Conference on Acoustics, Speech and Signal Processing  
This paper presents a minimum classification error (MCE) training approach for improving the accuracy of multi-class support vector machine (SVM) classifiers.  ...  The new approach yields improved performance over the traditional techniques for training multi-class SVM classifiers on this task.  ...  MCE Training of SVM Parameters: To optimize our multi-class SVM system, we use the minimum classification error (MCE) training approach [12].  ...
doi:10.1109/icassp.2010.5494948 dblp:conf/icassp/Hazen10 fatcat:si2jng47kjh2jiuiai5nzac64e
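
Not code from Hazen's paper, but as background on the criterion it optimizes: a minimal NumPy sketch of the generic MCE loss, i.e., a misclassification measure passed through a sigmoid to give a smoothed error count. The per-class discriminant scores and all names below are illustrative assumptions.

import numpy as np

def mce_loss(scores, label, gamma=1.0):
    # scores: array of per-class discriminant values g_k(x); label: index of the genuine class.
    # Misclassification measure: best competing score minus the genuine-class score
    # (positive values correspond to errors).
    competitors = np.delete(scores, label)
    d = np.max(competitors) - scores[label]
    # Smooth the 0/1 error count with a sigmoid; gamma controls the sharpness of the transition.
    return 1.0 / (1.0 + np.exp(-gamma * d))

# Example: the genuine class 0 scores highest, so the smoothed error is below 0.5.
print(mce_loss(np.array([2.0, 0.5, -1.0]), label=0))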

MCE Training Techniques for Topic Identification of Spoken Audio Documents

Timothy J. Hazen
2011 IEEE Transactions on Audio, Speech, and Language Processing  
In this paper, we discuss the use of minimum classification error (MCE) training as a means for improving traditional approaches to topic identification such as naive Bayes classifiers and support vector  ...  Sizeable improvements in topic identification accuracy using the new MCE training techniques were observed.  ...  Richardson and A. Margolis for their contributions to the early development of the techniques presented in this paper.  ... 
doi:10.1109/tasl.2011.2139207 fatcat:nksrvp45arf4db4cgvuc4kdbdq

Modified minimum classification error learning and its application to neural networks [chapter]

Hiroshi Shimodaira, Jun Rokui, Mitsuru Nakai
1998 Lecture Notes in Computer Science  
A novel method to improve the generalization performance of the Minimum Classification Error (MCE) / Generalized Probabilistic Descent (GPD) learning is proposed.  ...  In the present study, a regularization technique has been applied to MCE learning to overcome this problem.  ...  Kanad Keeni for the discussion on training neural networks.  ...
doi:10.1007/bfb0033303 fatcat:55z26adtxjfqlh3dvapoq7o62a

Prototype learning with margin-based conditional log-likelihood loss

Xiaobo Jin, Cheng-Lin Liu, Xinwen Hou
2008 Proceedings of the International Conference on Pattern Recognition (ICPR)  
A regularization term is added to avoid over-fitting in training. The CLL loss in LOGM is a convex function of the margin, and so gives better convergence than the MCE algorithm.  ...  The classification performance of nearest prototype classifiers largely relies on the prototype learning algorithms, such as the learning vector quantization (LVQ) and the minimum classification error  ...  These algorithms include the minimum classification error (MCE) method [3], the generalized LVQ (GLVQ) [7], the maximum class probability (MAXP) method [5], the soft nearest prototype classifier  ... 
doi:10.1109/icpr.2008.4760953 dblp:conf/icpr/JinLH08 fatcat:r2wcusanyffp5jntedb6ad2jiq

Regularized margin-based conditional log-likelihood loss for prototype learning

Xiao-Bo Jin, Cheng-Lin Liu, Xinwen Hou
2010 Pattern Recognition  
The minimum classification error (MCE) method and the soft nearest prototype classifier (SNPC) method are two important algorithms using misclassification loss.  ...  The CLL in the LOGM algorithm is a convex function of the margin, and so shows better convergence than the MCE.  ...  Minimum Classification Error (MCE) method: For a pattern x_n from genuine class k (its class label), the discriminant function for a class l is given by $g_l(x_n) = \max_s \{-\|x_n - m_{ls}\|^2\}$ (4), where $\|\cdot\|$  ... 
doi:10.1016/j.patcog.2010.01.013 fatcat:jh3hsxcwvrc4dacsl54eoyxe6u
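
As a reading aid (not code from the paper), here is a short NumPy sketch of the nearest-prototype discriminant in Eq. (4) together with a logistic loss of the resulting margin, i.e., the convex CLL-style surrogate the abstract contrasts with MCE. All names and numbers are illustrative assumptions.

import numpy as np

def discriminants(x, prototypes):
    # prototypes[l] holds the prototype vectors m_ls of class l; g_l(x) = max_s { -||x - m_ls||^2 }.
    return np.array([max(-np.sum((x - m) ** 2) for m in class_protos)
                     for class_protos in prototypes])

def margin_logistic_loss(x, label, prototypes, xi=1.0):
    g = discriminants(x, prototypes)
    # Hypothesis margin: genuine-class discriminant minus the best rival discriminant.
    d = g[label] - np.max(np.delete(g, label))
    # Logistic (conditional log-likelihood style) surrogate of the margin, convex in d.
    return np.log1p(np.exp(-xi * d))

# Toy example with two classes and two prototypes per class.
protos = [np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([[3.0, 3.0], [4.0, 4.0]])]
print(margin_logistic_loss(np.array([0.2, 0.1]), label=0, prototypes=protos))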

Discriminative Training for direct minimization of deletion, insertion and substitution errors

Sunghwan Shin, Ho-Young Jung, Biing-Hwang Juang
2011 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
training criteria, minimum deletion error (MDE), minimum insertion error (MIE), and minimum substitution error (MSE), of which each objective function can directly minimize each of the three types of  ...  Furthermore, a simple combination of the individual objective criteria outperforms the conventional string-based MCE in the overall recognition error rate.  ...  weights for type I and type II errors, respectively, and $l(\cdot)$ is a smoothed loss function normally defined by a sigmoid function [2].  ... 
doi:10.1109/icassp.2011.5947561 dblp:conf/icassp/ShinJJ11 fatcat:le3h6tzs5rhhfoqc5du73xx5mu

Discriminative likelihood score weighting based on acoustic-phonetic classification for speaker identification

Youngjoo Suh, Hoirin Kim
2014 EURASIP Journal on Advances in Signal Processing  
In this paper, a new discriminative likelihood score weighting technique is proposed for speaker identification.  ...  The proposed method employs a discriminative weighting of frame-level log-likelihood scores with acoustic-phonetic classification in the Gaussian mixture model (GMM)-based speaker identification.  ...  A loss function for approximating the empirical loss related to the soft count of classification errors is defined as $l_k(X; \Phi_W) = \frac{1}{1 + \exp(-\gamma\, d_k(X; \Phi_W))}$ (11), where $\gamma$ is a positive constant  ... 
doi:10.1186/1687-6180-2014-126 fatcat:4znlsd5475gr3ebugja7wcy6di
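
To make the idea concrete, here is a rough sketch (an assumption-based illustration, not the paper's actual parameterization) of weighting frame-level log-likelihood scores by acoustic-phonetic class before summing them into a speaker score.

import numpy as np

def weighted_speaker_score(frame_loglikes, frame_classes, class_weights):
    # frame_loglikes: per-frame GMM log-likelihoods for one speaker model.
    # frame_classes:  acoustic-phonetic class index assigned to each frame.
    # class_weights:  one weight per acoustic-phonetic class (placeholder values here;
    #                 in the paper they would be trained discriminatively).
    return float(np.sum(class_weights[frame_classes] * frame_loglikes))

# Toy example: three frames, two phonetic classes, hypothetical weights.
print(weighted_speaker_score(np.array([-5.1, -4.7, -6.0]),
                             np.array([0, 1, 0]),
                             np.array([1.2, 0.8])))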

Partial discriminative training for classification of overlapping classes in document analysis

Cheng-Lin Liu
2008 International Journal on Document Analysis and Recognition  
For such classification problems, this paper proposes a partial discriminative training (PDT) scheme, in which a training pattern of an overlapping class is used as a positive sample of its labeled class  ...  For classification of such overlapping classes, neither discriminating between them nor merging them into a metaclass is satisfactory.  ...  The classification error (loss) l_c(x) on a training pattern x with genuine class ω_c is approximated by the sigmoidal function: $l_c(x) = \sigma(d_c) = \frac{1}{1 + e^{-\xi d_c}}$ (8), where $\xi$ is a constant to control  ... 
doi:10.1007/s10032-008-0069-1 fatcat:abrckdk22fbqnkhfmdrcnh454i

A maximal figure-of-merit learning approach to text categorization

Sheng Gao, Wen Wu, Chin-Hui Lee, Tat-Seng Chua
2003 Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval - SIGIR '03  
To solve this highly nonlinear optimization problem, we use a generalized probabilistic descent algorithm.  ...  Other extensions to design discriminative multiple-category MFoM classifiers for application scenarios with new performance metrics could be envisioned too.  ...  We anticipate more future work on MFoM learning, including a comparative study on the evaluation of different performance metrics using different training objectives on individual classes and the overall  ... 
doi:10.1145/860435.860469 dblp:conf/sigir/GaoWLC03 fatcat:x3rjjai5s5a5heh57edq3t6sqq

Minimizing Sequential Confusion Error in Speech Command Recognition [article]

Zhanheng Yang, Hang Lv, Xiong Wang, Ao Zhang, Lei Xie
2022 arXiv pre-print
In this paper, inspired by the advances of discriminative training in speech recognition, we propose a novel minimize sequential confusion error (MSCE) training criterion particularly for SCR, aiming to  ...  During training, we propose several strategies to use prior knowledge to create a confusing sequence set for each similar-sounding command instead of creating the whole non-target command set, which can better  ...  Minimum Classification Error Generally speaking, the MCE criterion [22] is defined in terms of discriminative functions for each category and is optimized.  ... 
arXiv:2207.01261v1 fatcat:larjrkh5mfeopdvhpv7czyd7hi

Fuzzy Knowledge Based GIS for Zonation of Landslide Susceptibility [chapter]

J. K. Ghosh, Devanjan Bhattacharya, Swej Kumar Sharma
2012 Understanding Complex Systems  
These problems are removed with the advent of satellite data and processing using a geographical information system (GIS).  ...  Landslides cause loss of life and property, damage to natural resources, and hamper developmental projects like roads, dams, communication lines, bridges, etc.  ...  An error matrix has been used to evaluate classification accuracy (Table 2.3). An unbiased set of test samples has been chosen from the reference data for assessment of classification accuracy.  ... 
doi:10.1007/978-3-642-29329-0_2 fatcat:rkz52khmufgvvmdkao5kwe5gmy

Hierarchical large-margin Gaussian mixture models for phonetic classification

Hung-An Chang, James R. Glass
2007 2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU)  
classification error (MCE) training.  ...  Experiments are performed on a standard phonetic classification task and a large vocabulary speech recognition (LVCSR) task.  ...  In the past ten years, several objective functions such as maximum mutual information (MMI) [38] training, minimum classification error (MCE) [17] , and minimum word/phone error (MWE/MPE) [29] training  ... 
doi:10.1109/asru.2007.4430123 dblp:conf/asru/ChangG07 fatcat:yreg3f23pvbopff3mchwedt4du

An application of minimum classification error to feature space transformations for speech recognition

Ángel de la Torre, Antonio M. Peinado, Antonio J. Rubio, Victoria E. Sánchez, Jesús E. Díaz
1996 Speech Communication  
The use of signal transformations is a necessary step for feature extraction  ...  In this paper we propose a new method to obtain feature space transformations based on the Minimum Classification Error criterion.  ...  The most commonly used cost function $l_n$ is defined as a sigmoid function of an error measure $d_n(X_n)$: $l(X_n) = \frac{1}{1 + e^{-\alpha\, d_n(X_n)}}$ (14), where $\alpha$ is the transition parameter from correct to incorrect classification  ... 
doi:10.1016/s0167-6393(96)00061-1 fatcat:pmsd2va4zfas3oqiue62ux6wdm
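
For context, a sigmoid cost like Eq. (14) is usually applied to the standard MCE misclassification measure; a sketch of that generic Juang/Katagiri-style formulation (not necessarily the exact notation of this paper) is:

\[
d_n(X_n) = -\,g_c(X_n) + \log\!\left[\frac{1}{M-1}\sum_{j \neq c} e^{\eta\, g_j(X_n)}\right]^{1/\eta},
\qquad
l(X_n) = \frac{1}{1 + e^{-\alpha\, d_n(X_n)}},
\]

where g_c is the discriminant of the correct class c, M is the number of classes, and letting η grow large recovers the single hardest competitor.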

Evaluation of multi-level context-dependent acoustic model for large vocabulary speaker adaptation tasks

Hung-An Chang, James Glass
2012 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
In this paper, we investigate the ability of a recently proposed discriminatively trained, multi-level context-dependent acoustic model to adapt to a new speaker in both supervised and unsupervised adaptation  ...  Speaker adaptive speech recognition experiments performed on a largevocabulary spoken lecture task show that the multi-level model reduces word error rates by more than 10% in both cases as compared to  ...  Acknowledgements This work is supported by the T-Party Project, a joint research program between MIT and Quanta Computer Inc., Taiwan.  ... 
doi:10.1109/icassp.2012.6288873 dblp:conf/icassp/ChangG12 fatcat:aoystkioabgy5oyahneomdbub4
Showing results 1–15 of 128.