
A Leave-One-out Cross Validation Bound for Kernel Methods with Applications in Learning [chapter]

Tong Zhang
2001 Lecture Notes in Computer Science  
(x_1, y_1), ..., (x_n, y_n). Proof. Using Lemma 3 and the Minkowski inequality, we have ... Now apply Jensen's inequality to the second term on the right-hand side ... the conditions of Lemma 1 with ... given by (11). If 1 ≤ s ≤ 2, let Q(x_1, y_1, ..., x_n, y_n) = ... where E denotes the expectation over n random training samples  ...
doi:10.1007/3-540-44581-1_28 fatcat:irxltk5ibrfnta5dfl2toqo5dm

Efficient approximate leave-one-out cross-validation for kernel logistic regression

Gavin C. Cawley, Nicola L. C. Talbot
2008 Machine Learning  
In this paper, we propose a novel model selection strategy for KLR, based on a computationally efficient closed-form approximation of the leave-one-out cross-validation procedure.  ...  Results obtained on a variety of synthetic and real-world benchmark datasets are given, demonstrating that the proposed model selection procedure is competitive with a more conventional k-fold cross-validation  ...  We are particularly indebted to Olivier for providing the clever trick (26) for evaluating gradient information more efficiently and (with one of the reviewers) for pointing out the alternate derivation  ... 
doi:10.1007/s10994-008-5055-9 fatcat:lwyqgrujrrbarhbn7p6ud3xdiq

Efficient leave-one-out cross-validation of kernel fisher discriminant classifiers

Gavin C. Cawley, Nicola L.C. Talbot
2003 Pattern Recognition  
Leave-one-out cross-validation then becomes an attractive means of model selection in large-scale applications of kernel Fisher discriminant analysis, being significantly faster than conventional k-fold  ...  We show that leave-one-out cross-validation of kernel Fisher discriminant classifiers can be implemented with a computational complexity of only O(ℓ³) operations rather than the O(ℓ⁴) of a naïve implementation  ...  cross-validation.  ... 
doi:10.1016/s0031-3203(03)00136-5 fatcat:dzuueoby6vamtnv6nv2b4hi2aq
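
The O(ℓ³)-versus-O(ℓ⁴) gap above comes from the standard shortcut for regularised linear smoothers: the leave-one-out residual equals the full-sample residual divided by (1 − H_ii), so a single matrix factorisation replaces ℓ separate refits. Below is a minimal sketch for kernel ridge regression (not the exact kernel Fisher discriminant formulation of the paper), assuming an RBF kernel and a fixed regularisation parameter lam; the synthetic data are illustrative only.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def loo_residuals_krr(X, y, lam=1e-2, gamma=1.0):
    # Kernel ridge: alpha = (K + lam*I)^{-1} y, hat matrix H = K (K + lam*I)^{-1}.
    # LOO residual_i = (y_i - f(x_i)) / (1 - H_ii): one O(n^3) solve
    # instead of n refits (roughly O(n^4)).
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = K + lam * np.eye(n)
    alpha = np.linalg.solve(A, y)
    H = K @ np.linalg.inv(A)
    resid = y - K @ alpha
    return resid / (1.0 - np.diag(H))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
print(np.mean(loo_residuals_krr(X, y) ** 2))  # LOO mean squared error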

Evolutionary Kernel Learning [chapter]

John Langford, Xinhua Zhang, Gavin Brown, Indrajit Bhattacharya, Lise Getoor, Thomas Zeugmann, Thomas Zeugmann, Ljupčo Todorovski, Kai Ming Ting, David Corne, Julia Handl, Joshua Knowles (+23 others)
2011 Encyclopedia of Machine Learning  
Definition Evolutionary kernel learning stands for using evolutionary algorithms to optimize the kernel function for a kernel-based learning machine.  ...  Leave-one-out error Synonyms hold-one-out error, LOO error Definition Given a data set of ℓ patterns, the LOO error is the ℓ-fold cross-validation error.  ...  For hard-margin SVMs, the number of support vectors (SVs) is an upper bound on the expected number of errors made by the leave-one-out procedure (e.g., see [1]).  ... 
doi:10.1007/978-0-387-30164-8_284 fatcat:n757tyeoevcorg7vu7stvrblyq
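
The support-vector bound quoted in the last excerpt says the expected leave-one-out error of a hard-margin SVM is at most (#support vectors)/ℓ. A quick, hedged empirical check with scikit-learn follows (a large C approximates the hard margin; the dataset and parameters are illustrative, and the bound holds only in expectation, so a single draw may deviate from it).

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, cluster_std=1.5, random_state=0)
clf = SVC(kernel="rbf", C=1e6, gamma=0.5)   # large C ~ hard margin
clf.fit(X, y)

sv_bound = len(clf.support_) / len(y)                  # (#SVs) / n
loo_err = 1.0 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"SV bound: {sv_bound:.3f}  LOO error: {loo_err:.3f}")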

Evaluation of Performance Measures for SVR Hyperparameter Selection

Koen Smets, Brigitte Verdonk, Elsa M. Jordaan
2007 Neural Networks (IJCNN), International Joint Conference on  
In this study, we empirically investigate different performance measures found in the literature: k-fold cross-validation, the computationally intensive, but almost unbiased leave-one-out error, its upper  ...  For each of the estimates we focus on accuracy, complexity and the presence of local minima.  ...  Both k-fold cross-validation and LOO are applicable to arbitrary learning algorithms.  ... 
doi:10.1109/ijcnn.2007.4371031 dblp:conf/ijcnn/SmetsVJ07 fatcat:xvg4a7bdurgszdvszpx55nsu7e
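
As the excerpt notes, k-fold cross-validation (unlike most analytic bounds) applies to arbitrary learners. A minimal sketch of SVR hyperparameter selection by grid search with 5-fold cross-validation in scikit-learn; the parameter grid and synthetic data are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.normal(size=200)

grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_, -search.best_score_)   # best grid point and its CV MSE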

Feature Scaling for Kernel Fisher Discriminant Analysis Using Leave-One-Out Cross Validation

Liefeng Bo, Ling Wang, Licheng Jiao
2006 Neural Computation  
The proposed algorithm is based on optimizing the smooth leave-one-out error via a gradient descent method and has been demonstrated to be computationally feasible.  ...  In this letter, we focus on the feature scaling kernel where each feature individually associates with a scaling factor.  ...  Acknowledgments We give thanks to two reviewers for their helpful comments that greatly improved the paper and to Lin Shi for her help in proofreading the manuscript.  ... 
doi:10.1162/neco.2006.18.4.961 pmid:16494697 fatcat:gu4rndw4treh5fekvr6zzmqgi4
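
A hedged sketch of the general idea behind this entry: attach a scaling factor to each feature inside an RBF kernel and descend the gradient of a leave-one-out criterion. The sketch below uses kernel ridge regression with closed-form LOO residuals and a finite-difference gradient rather than the authors' analytic gradient; the learning rate and step count are illustrative assumptions.

import numpy as np

def loo_mse(scales, X, y, lam=1e-2):
    # Kernel ridge LOO error with per-feature scaling of an RBF kernel.
    Xs = X * scales
    d2 = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2)
    A = K + lam * np.eye(len(y))
    H = K @ np.linalg.inv(A)
    r = (y - H @ y) / (1.0 - np.diag(H))   # closed-form LOO residuals
    return np.mean(r ** 2)

def optimise_scales(X, y, steps=50, lr=0.1, eps=1e-4):
    # Plain gradient descent on the LOO error, finite-difference gradient.
    s = np.ones(X.shape[1])
    for _ in range(steps):
        g = np.array([(loo_mse(s + eps * e, X, y) - loo_mse(s - eps * e, X, y))
                      / (2 * eps) for e in np.eye(len(s))])
        s = s - lr * g
    return s

Scaling factors attached to uninformative features tend to shrink towards zero, which is what makes such a criterion usable for feature weighting.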

Efficient Pairwise Learning Using Kernel Ridge Regression: an Exact Two-Step Method [article]

Michiel Stock and Tapio Pahikkala and Antti Airola and Bernard De Baets and Willem Waegeman
2016 arXiv   pre-print
In this work we analyze kernel-based methods for pairwise learning, with a particular focus on a recently suggested two-step method.  ...  In addition, the two-step method allows us to establish novel algorithmic shortcuts for efficient training and validation on very large datasets.  ...  Acknowledgements Part of this work was carried out using the Stevin Supercomputer Infrastructure at Ghent University, funded by Ghent University, the Hercules Foundation and the Flemish Government -department  ... 
arXiv:1606.04275v1 fatcat:am3yswdiqvcozjkkrjilopagia
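
A hedged sketch of the two-step construction for a fully observed dyadic label matrix Y: kernel ridge regression is applied once over the first object set (rows) and once over the second (columns), so the in-sample predictions are a product of two ordinary smoother matrices. This conveys the structure only; the paper's exact leave-one-out shortcuts are not reproduced here, and the linear kernels and random data below are illustrative.

import numpy as np

def krr_smoother(K, lam):
    # Linear smoother of kernel ridge regression: predictions = S @ y.
    return K @ np.linalg.inv(K + lam * np.eye(K.shape[0]))

def two_step_krr(Ku, Kv, Y, lam_u=1e-2, lam_v=1e-2):
    # Step 1: ridge over the first object set (rows of Y).
    # Step 2: ridge over the second object set (columns of Y).
    return krr_smoother(Ku, lam_u) @ Y @ krr_smoother(Kv, lam_v).T

rng = np.random.default_rng(3)
U = rng.normal(size=(20, 4)); V = rng.normal(size=(15, 4))
Ku, Kv = U @ U.T, V @ V.T                 # simple linear kernels
Y = rng.normal(size=(20, 15))
print(two_step_krr(Ku, Kv, Y).shape)      # (20, 15)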

Kernel learning at the first level of inference

Gavin C. Cawley, Nicola L.C. Talbot
2014 Neural Networks  
Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds.  ...  Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel  ...  The research presented in this paper was carried out on the High Performance Computing Cluster supported by the Research and Specialist Computing Support service at the University of East Anglia.  ... 
doi:10.1016/j.neunet.2014.01.011 pmid:24561452 fatcat:iszwkmrzabg57eksbfcmyruz7m

Bandwidth Selection Problem in Nonparametric Functional Regression

Daniela Kuruczová, Jan Koláček
2017 Statistika: Statistics and Economy Journal  
The performance of these methods is illustrated in a real data application. A conclusion is drawn that local bandwidth selection methods are more appropriate in the functional setting.  ...  Functional kernel regression belongs to popular nonparametric methods used for this purpose.  ...  The local cross-validation function uses the leave-one-out regression estimate along with a weight function dependent on data point x.  ... 
doaj:0e92f666c40f48958186bc6bc00ed473 fatcat:c3s6avzjhzf2lj45wwvlwkyy6a
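
In the scalar (non-functional) special case, the leave-one-out idea mentioned in the excerpt amounts to scoring each candidate bandwidth by predicting every observation from all the others. A minimal sketch for a Nadaraya-Watson estimator with a Gaussian kernel; the bandwidth grid and synthetic data are illustrative assumptions.

import numpy as np

def nw_loo_score(x, y, h):
    # Leave-one-out squared error of the Nadaraya-Watson estimate at bandwidth h.
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    np.fill_diagonal(W, 0.0)               # drop the point being predicted
    fhat = (W @ y) / W.sum(axis=1)
    return np.mean((y - fhat) ** 2)

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 10, 150))
y = np.sin(x) + 0.2 * rng.normal(size=150)
grid = np.linspace(0.05, 2.0, 40)
best_h = grid[np.argmin([nw_loo_score(x, y, h) for h in grid])]
print(best_h)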

Chunking with Support Vector Machines
(Japanese title: Chunk Identification Using Support Vector Machines)

TAKU KUDO, YUJI MATSUMOTO
2002 Journal of Natural Language Processing  
Furthermore, by the kernel principle, SVMs can carry out training with smaller computational overhead, independent of the dimensionality of the feature space.  ...  SVMs are known to achieve high generalization performance even with input data of high dimensional feature spaces.  ...  Furthermore, we achieve higher accuracy than the baseline method by applying cross-validation, VC-bound and leave-one-out methods.  ... 
doi:10.5715/jnlp.9.5_3 fatcat:gq5h324bljbq5lybkqmojc77au

Leave Zero Out: Towards a No-Cross-Validation Approach for Model Selection [article]

Weikai Li, Chuanxing Geng, Songcan Chen
2020 arXiv   pre-print
In addition, the proposed validation approach is suitable for a wide range of learning settings, because both the augmentation and the out-of-sample estimation are independent of the learning process.  ...  On the one hand, for small data cases, CV suffers from a conservatively biased estimate, since part of the limited data has to be held out for validation.  ...  For linear-fitting-based models under the squared-error loss, generalized cross-validation provides a convenient approximation to leave-one-out cross-validation based on the trace of the smoothing matrix  ... 
arXiv:2012.13309v2 fatcat:5prgibbgrzajncr6oocona6ose
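
The last excerpt refers to generalised cross-validation (GCV), which replaces each leverage H_ii in the exact leave-one-out formula by the average tr(S)/n, giving GCV(lam) = n * RSS / (n - tr(S))^2. A small sketch comparing GCV with exact LOO for ordinary ridge regression (the data and lambda grid are illustrative):

import numpy as np

def ridge_scores(X, y, lam):
    n = len(y)
    S = X @ np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1])) @ X.T  # smoother
    resid = y - S @ y
    loo = np.mean((resid / (1.0 - np.diag(S))) ** 2)          # exact LOO MSE
    gcv = n * np.sum(resid ** 2) / (n - np.trace(S)) ** 2     # GCV approximation
    return loo, gcv

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.0, 0.5, 0.0]) + rng.normal(size=100)
for lam in [0.01, 0.1, 1.0, 10.0]:
    print(lam, ridge_scores(X, y, lam))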

Classification of contamination in salt marsh plants using hyperspectral reflectance

M.D. Wilson, S.L. Ustin, D.M. Rocke
2004 IEEE Transactions on Geoscience and Remote Sensing  
The statistic we used to compare the effectiveness of the methodologies was the leave-one-out cross-validation estimate of the prediction error.  ...  in four experiments) that have been exposed to varying levels of different heavy metal or petroleum toxicity, with a control treatment for each experiment.  ...  Cadwell for help with the greenhouse work and P. Zarco-Tejada for help with the remote sensing technologies.  ... 
doi:10.1109/tgrs.2003.823278 fatcat:vzc5b2wejjbdnhfhmiaoem7y4a

Effects of Reduced Precision on Floating-Point SVM Classification Accuracy

Bernd Lesser, Manfred Mücke, Wilfried N. Gansterer
2011 Procedia Computer Science  
Furthermore, we demonstrate analytic bounds on the working precision for SVMs with a Gaussian kernel, providing good predictions of possible reductions in the working precision without sacrificing classification  ...  There is growing interest in performing ever more complex classification tasks on mobile and embedded devices in real-time, which results in the need for efficient implementations of the respective algorithms  ...  E2 - Data Perturbation (leave-one-out cross-validation): Using leave-one-out cross-validation, the SVM is trained n times.  ... 
doi:10.1016/j.procs.2011.04.053 fatcat:wiuoh6o7pne5vikxnadk46pmmq
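
One way to probe the effect described above without dedicated hardware is to re-evaluate a trained SVM's decision function at a narrower floating-point type and count label flips. A hedged sketch using numpy and a scikit-learn RBF SVM, with float32 standing in for a generic reduced working precision (the dataset and parameters are illustrative):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
clf = SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X, y)

def rbf_decision(Xq, sv, coef, b, gamma, dtype):
    # Re-evaluate the SVM decision function at a chosen working precision.
    Xq, sv, coef = Xq.astype(dtype), sv.astype(dtype), coef.astype(dtype)
    d2 = ((Xq[:, None, :] - sv[None, :, :]) ** 2).sum(-1)
    return np.exp(-dtype(gamma) * d2) @ coef + dtype(b)

args = (clf.support_vectors_, clf.dual_coef_[0], clf.intercept_[0], 0.1)
f64 = rbf_decision(X, *args, np.float64)
f32 = rbf_decision(X, *args, np.float32)
print(np.mean(np.sign(f64) != np.sign(f32)))   # fraction of flipped labels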

Optimally regularised kernel Fisher discriminant analysis

K. Saadi, N.L.C. Talbot, G.C. Cawley
2004 Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004.  
In this paper, we extend an existing analytical expression for the leave-one-out cross-validation error [2] such that the leave-one-out error can be re-estimated following a change in the value of the  ...  Results obtained on real-world and synthetic benchmark datasets indicate that the proposed method is competitive with model selection based on k-fold cross-validation in terms of generalisation, whilst  ...  Acknowledgements We thank the anonymous reviewers for their helpful and constructive comments  ... 
doi:10.1109/icpr.2004.1334245 dblp:conf/icpr/SaadiTC04 fatcat:os77bbidyrfano7vjkpjzoyumq
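
The re-estimation idea in this entry has a generic kernel ridge analogue (not the exact kernel Fisher discriminant update of the paper): eigendecompose the kernel matrix once, after which the leave-one-out error for any new regularisation value costs only O(n^2). A hedged sketch, with illustrative data and lambda grid:

import numpy as np

def loo_curve(K, y, lambdas):
    # One O(n^3) eigendecomposition, then each lambda is cheap.
    w, Q = np.linalg.eigh(K)              # K = Q diag(w) Q^T
    Qty = Q.T @ y
    scores = []
    for lam in lambdas:
        h = w / (w + lam)                 # eigenvalues of the hat matrix
        H_diag = (Q ** 2) @ h             # diag of Q diag(h) Q^T
        fitted = Q @ (h * Qty)
        scores.append(np.mean(((y - fitted) / (1.0 - H_diag)) ** 2))
    return np.array(scores)

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=60)
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
lams = np.logspace(-4, 2, 20)
print(lams[np.argmin(loo_curve(K, y, lams))])   # lambda with smallest LOO error
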
Showing results 1 — 15 out of 24,951 results