
Augmented Kernel Matrix vs Classifier Fusion for Object Recognition

Muhammad Awais, Fei Yan, Krystian Mikolajczyk, Josef Kittler
2011 Proceedings of the British Machine Vision Conference
Augmented Kernel Matrix (AKM) has recently been proposed to accommodate the fact that a single training example may have different importance in different feature spaces, in contrast to Multiple Kernel Learning (MKL), which assigns the same weight to all examples in one feature space.  ...  This research was supported by UK EPSRC grants EP/F003420/1 and EP/F069421/1 and the BBC R&D grants.  ...
doi:10.5244/c.25.60 dblp:conf/bmvc/AwaisYMK11 fatcat:jlfcxb26oncf3jmbsqbbpbb4um
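
To make the contrast drawn in this abstract concrete, here is a minimal numpy sketch of the two combination schemes; the matrices, weights, and the block-diagonal reading of AKM are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.linalg import block_diag

def combine_kernels(kernels, betas):
    """Standard MKL combination: one weight per feature space,
    shared by every training example in that space."""
    return sum(b * K for b, K in zip(betas, kernels))

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(4, 3)), rng.normal(size=(4, 5))  # two feature channels
K1, K2 = X1 @ X1.T, X2 @ X2.T                              # their kernel matrices

K_mkl = combine_kernels([K1, K2], betas=[0.7, 0.3])

# AKM instead stacks the kernels block-diagonally: each training example is
# replicated once per feature space, so each copy can carry its own weight
# rather than sharing a single per-space coefficient.
K_akm = block_diag(K1, K2)
```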

Multiple Kernel Learning in the Primal for Multi-modal Alzheimer's Disease Classification [article]

Fayao Liu, Luping Zhou, Chunhua Shen, Jianping Yin
2013 arXiv   pre-print
In this work, we propose a novel multiple kernel learning framework to combine multi-modal features for AD classification, which is scalable and easy to implement.  ...  By applying the Fourier transform to the Gaussian kernel, we explicitly compute the mapping function, which leads to a more straightforward solution of the problem in the primal space.  ...  The process of learning the kernel weights while simultaneously minimizing the structural risk is known as multiple kernel learning (MKL).  ...
arXiv:1310.0890v1 fatcat:z6l2gd3nc5gabmwicviqie4z4e

Multiple Kernel Learning in the Primal for Multimodal Alzheimer's Disease Classification

Fayao Liu, Luping Zhou, Chunhua Shen, Jianping Yin
2014 IEEE Journal of Biomedical and Health Informatics
In this work, we propose a novel multiple kernel learning framework to combine multi-modal features for AD classification, which is scalable and easy to implement.  ...  By applying the Fourier transform to the Gaussian kernel, we explicitly compute the mapping function, which leads to a more straightforward solution of the problem in the primal.  ...  The process of learning the kernel weights while simultaneously minimizing the structural risk is known as multiple kernel learning (MKL).  ...
doi:10.1109/jbhi.2013.2285378 pmid:24132030 fatcat:er2i773g2vfzpoeirs4vojozma
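
The explicit Gaussian-kernel mapping that both versions of this paper describe is commonly realized with random Fourier features (Rahimi and Recht); a sketch of that standard construction follows, with dimensions and the bandwidth gamma as illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, n_features=100, gamma=1.0, seed=0):
    """Approximate the Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    with an explicit map z(x), so that z(x) @ z(y) ~= k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(10, 5))
Z = random_fourier_features(X, n_features=500)
K_approx = Z @ Z.T  # close to the exact Gaussian kernel matrix for large n_features
```

With the explicit map Z in hand, a classifier can be trained directly in the primal on Z instead of on the n x n kernel matrix, which is what makes a primal MKL formulation scalable.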

Online-batch strongly convex Multi Kernel Learning

Francesco Orabona, Luo Jie, Barbara Caputo
2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Here we present a Multiclass Multi Kernel Learning (MKL) algorithm that obtains state-of-the-art performance with considerably lower training time.  ...  Thanks to this new setting, we can directly solve the problem in the primal formulation.  ...  Acknowledgments The kernel matrices of Caltech-101 were kindly provided by Peter Gehler, whom we also thank for his useful comments. This work was sponsored by the EU project DIRAC IST-027787.  ...
doi:10.1109/cvpr.2010.5540137 dblp:conf/cvpr/OrabonaJC10 fatcat:fx23gqyq3ffgnkpduhkkt4rpkq
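
As a hedged illustration of what solving such a problem "directly in the primal" can look like online, here is a single Pegasos-style stochastic subgradient step with one weight block per feature map; this is a generic sketch, not the paper's actual algorithm.

```python
import numpy as np

def online_mkl_step(W, feats, y, lam, t):
    """One stochastic subgradient step on a primal multiclass MKL-style
    objective: weight blocks W[m] of shape (d_m, n_classes), multiclass
    hinge loss, Pegasos-style step size. Generic sketch only."""
    scores = sum(f @ Wf for f, Wf in zip(feats, W))   # (n_classes,)
    margins = scores - scores[y] + 1.0
    margins[y] = 0.0
    r = int(np.argmax(margins))                       # most violating class
    eta = 1.0 / (lam * t)
    for f, Wf in zip(feats, W):
        Wf *= 1.0 - eta * lam                         # regularizer shrinkage
        if margins[r] > 0:                            # hinge subgradient
            Wf[:, y] += eta * f
            Wf[:, r] -= eta * f
    return W

rng = np.random.default_rng(0)
W = [0.01 * rng.normal(size=(3, 4)), 0.01 * rng.normal(size=(5, 4))]
x = [rng.normal(size=3), rng.normal(size=5)]  # one sample seen as two feature maps
W = online_mkl_step(W, x, y=2, lam=0.1, t=1)
```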

AdaMKL: A Novel Biconvex Multiple Kernel Learning Approach

Ziming Zhang, Ze-Nian Li, Mark S. Drew
2010 20th International Conference on Pattern Recognition
In this paper, we propose a novel large-margin based approach for multiple kernel learning (MKL) using biconvex optimization, called Adaptive Multiple Kernel Learning (AdaMKL).  ...  To learn the weights for support vectors and the kernel coefficients, AdaMKL minimizes the objective function alternately, learning one component while fixing the other at a time, and in this way only  ...  Adaptive Multiple Kernel Learning In this paper, we focus on binary classification using AdaMKL. We follow the notation in Section 1.  ...
doi:10.1109/icpr.2010.521 dblp:conf/icpr/ZhangLD10 fatcat:xxp6fpu6zveu7jgsmbunke5dnu
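
A schematic of the alternating scheme the abstract describes, fixing one component while optimizing the other; the coefficient update below is a deliberately simple heuristic stand-in, not AdaMKL's actual subproblems.

```python
import numpy as np
from sklearn.svm import SVC

def alternate_mkl(kernels, y, n_iter=10):
    """Alternating sketch: fix kernel coefficients and train an SVM on the
    combined kernel; then re-weight each kernel by its margin "energy".
    The coefficient update is a heuristic, not AdaMKL's rule."""
    betas = np.ones(len(kernels)) / len(kernels)
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(betas, kernels))
        svm = SVC(kernel="precomputed").fit(K, y)
        a = np.zeros(len(y))
        a[svm.support_] = svm.dual_coef_.ravel()   # alpha_i * y_i
        energy = np.array([a @ Km @ a for Km in kernels])
        betas = np.maximum(energy, 1e-12)
        betas /= betas.sum()
    return betas, svm

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))
y = (X[:, 0] > 0).astype(int)
K1, K2 = X[:, :3] @ X[:, :3].T, X[:, 3:] @ X[:, 3:].T
betas, svm = alternate_mkl([K1, K2], y)
```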

Group Based Localized Multiple Kernel Learning Algorithm with lp-Norm

Guangyuan Fu, Qingchao Wang, Dongying Bai, Linlin Li
2016 International Journal of Innovative Computing, Information and Control  
Because a sparse constraint may discard useful kernels, we use an lp-norm constraint on the kernels and obtain non-sparse results, avoiding the loss of useful kernels.  ...  In this paper, we propose a group-based non-sparse localized multiple kernel learning algorithm to tackle the issues above.  ...  The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.  ...
doi:10.24507/ijicic.12.06.1835 fatcat:3fkad4wyqzddposs33tej2lmna
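
For context on the lp-norm constraint mentioned here, the closed-form weight update used in generic non-sparse lp-norm MKL (Kloft et al.) is short enough to sketch; it also shows why p > 1 keeps all kernels active.

```python
import numpy as np

def lp_norm_weights(block_norms, p=2.0):
    """Closed-form l_p-norm MKL update: beta_m proportional to
    ||w_m||^(2/(p+1)), normalized so that ||beta||_p = 1.
    For p > 1 every kernel keeps a nonzero weight (non-sparse);
    p -> 1 recovers the sparse l_1 solution."""
    block_norms = np.asarray(block_norms, dtype=float)
    beta = block_norms ** (2.0 / (p + 1.0))
    return beta / np.linalg.norm(beta, ord=p)

print(lp_norm_weights([0.9, 0.5, 0.1], p=2.0))  # all weights stay nonzero
```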

RV-SVM: An Efficient Method for Learning Ranking SVM [chapter]

Hwanjo Yu, Youngdae Kim, Seungwon Hwang
2009 Lecture Notes in Computer Science  
In this paper, we first develop a 1-norm ranking SVM that is faster in testing than the standard ranking SVM, and propose Ranking Vector SVM (RV-SVM), which revises the 1-norm ranking SVM for faster training  ...  We experimentally compared the RV-SVM with the state-of-the-art rank learning method provided in SVM-light.  ...  We then develop the Ranking Vector SVM (RV-SVM), which uses as few support vectors as the 1-norm SVM and trains much faster than the standard 2-norm SVM.  ...
doi:10.1007/978-3-642-01307-2_39 fatcat:mhpiqaljuraejflgazodtuafz4
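
The 1-norm formulation underlying both the 1-norm ranking SVM and RV-SVM is a linear program; as a sketch, here is a plain (non-ranking) 1-norm SVM posed as an LP with scipy, with toy data and variable names that are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def l1_svm(X, y, C=1.0):
    """1-norm SVM as a linear program:
        min  ||w||_1 + C * sum(xi)
        s.t. y_i (w @ x_i + b) >= 1 - xi_i,  xi >= 0
    with w split as u - v (u, v >= 0) to linearize the l_1 norm."""
    n, d = X.shape
    Yx = y[:, None] * X
    # variable order: [u (d), v (d), b (1), xi (n)]
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    A_ub = np.hstack([-Yx, Yx, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    w = res.x[:d] - res.x[d:2 * d]
    return w, res.x[2 * d]

X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = l1_svm(X, y)  # the l_1 objective drives many w entries to exactly zero
```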

Linear Programming SVM-ARMA2K With Application in Engine System Identification

Zhao Lu, Jing Sun, Kenneth Butts
2011 IEEE Transactions on Automation Science and Engineering  
In particular, the possible generalization of LP-SVM-ARMA2K via more complex composite kernel functions is also discussed to meet the diversity of industrial practice.  ...  Inspired by the triumphs of support vector learning methodology in pattern recognition and regression analysis, an innovative nonlinear system identification algorithm, LP-SVM-ARMA2K, was developed  ...  In our simulation, the subsystems (26)-(30) are learned by LP-SVR and QP-SVR, respectively.  ...
doi:10.1109/tase.2011.2140105 fatcat:ak7twh24mnffnein6cnaylqqwu
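
A hedged illustration of the composite-kernel idea this abstract refers to: SVM-ARMA2K-style identification applies one kernel to past outputs and another to past inputs, then combines them by sum or product. The kernels and toy regressors below are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def composite_kernel(ya, ua, yb, ub, mode="sum"):
    """Composite kernel over an ARMA regressor: one kernel on the
    autoregressive part (past outputs y) and one on the exogenous
    part (past inputs u), combined additively or multiplicatively."""
    k_ar, k_x = rbf(ya, yb), rbf(ua, ub)
    return k_ar + k_x if mode == "sum" else k_ar * k_x

# Toy regressors: 3 past outputs and 2 past inputs per sample.
ya, ua = np.array([0.1, 0.3, 0.2]), np.array([1.0, 0.8])
yb, ub = np.array([0.2, 0.25, 0.1]), np.array([0.9, 1.1])
print(composite_kernel(ya, ua, yb, ub))
```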

NESVM: a Fast Gradient Method for Support Vector Machines [article]

Tianyi Zhou, Dacheng Tao, Xindong Wu
2010 arXiv   pre-print
In particular, NESVM smoothes the non-differentiable hinge loss and ℓ_1-norm in the primal SVM. Then the optimal gradient method without any line search is adopted to solve the optimization.  ...  In addition, NESVM is available for both linear and nonlinear kernels.  ...  Smooth the ℓ_1-norm: In LP-SVM, the regularizer is defined as the sum of the ℓ_1-norm terms $\ell(w_i) = |w_i|$, i.e., $R(w) = \sum_{i=1}^{p} \ell(w_i)$.  ...
arXiv:1008.4000v1 fatcat:puci77wkhfhv7caiuwzsvtdnfy

NESVM: A Fast Gradient Method for Support Vector Machines

Tianyi Zhou, Dacheng Tao, Xindong Wu
2010 2010 IEEE International Conference on Data Mining  
In particular, NESVM smoothes the non-differentiable hinge loss and ℓ_1-norm in the primal SVM. Then the optimal gradient method without any line search is adopted to solve the optimization.  ...  In addition, NESVM is available for both linear and nonlinear kernels.  ...  Smooth the ℓ_1-norm: In LP-SVM, the regularizer is defined as the sum of the ℓ_1-norm terms $\ell(w_i) = |w_i|$, i.e., $R(w) = \sum_{i=1}^{p} \ell(w_i)$.  ...
doi:10.1109/icdm.2010.135 dblp:conf/icdm/ZhouTW10 fatcat:czrtkottbnh6fjpr3ucjk2445e
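
The two smoothing steps that both versions of NESVM describe can be made concrete with Nesterov-style smoothed surrogates; the exact functions in the paper may differ, so treat this as a sketch of the standard construction.

```python
import numpy as np

def smooth_hinge(z, mu=0.1):
    """Nesterov-smoothed hinge loss max(0, 1 - z):
    quadratic near the kink, linear beyond it."""
    t = 1.0 - z
    return np.where(t <= 0, 0.0,
           np.where(t >= mu, t - mu / 2.0, t ** 2 / (2.0 * mu)))

def smooth_l1(w, mu=0.1):
    """Huber-style smoothing of |w|, the smoothed l_1 regularizer."""
    return np.where(np.abs(w) >= mu,
                    np.abs(w) - mu / 2.0,
                    w ** 2 / (2.0 * mu))

z = np.linspace(-1, 3, 5)
print(smooth_hinge(z), smooth_l1(np.array([-0.5, 0.01, 0.5])))
```

Both surrogates are differentiable everywhere, which is what allows an optimal gradient method to be applied without line search.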

From Kernel Machines to Ensemble Learning [article]

Chunhua Shen, Fayao Liu
2014 arXiv   pre-print
Here we propose a principled framework for directly constructing ensemble learning methods from kernel methods.  ...  In other words, it is possible to design ensemble methods directly from SVM without any intermediate procedure.  ...  in the primal objective of $f_{lp}$. 2) The constraint of nonnegative $w$ leads to the dual inequality constraint.  ...
arXiv:1401.0767v1 fatcat:extsje6t4rdjdco2isrjle27we
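
A sketch of the kind of LP this snippet alludes to: an ensemble f(x) = sum_j w_j h_j(x) with nonnegative weights fit by a linear program, where the nonnegativity of w is what produces the dual inequality constraint mentioned above. This is a generic LPBoost-style formulation, not necessarily the paper's exact f_lp.

```python
import numpy as np
from scipy.optimize import linprog

def lp_ensemble(H, y, C=1.0):
    """Fit nonnegative ensemble weights w for f(x) = H @ w by an LP:
        min  sum(w) + C * sum(xi)
        s.t. y_i * (H[i] @ w) >= 1 - xi_i,   w >= 0, xi >= 0.
    H[i, j] is the output of base hypothesis j on example i."""
    n, m = H.shape
    c = np.concatenate([np.ones(m), C * np.ones(n)])
    A_ub = np.hstack([-(y[:, None] * H), -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (m + n))
    return res.x[:m]

H = np.array([[1.0, -1.0], [1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = lp_ensemble(H, y)  # nonnegative weights over the two base hypotheses
```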

Building Sparse Multiple-Kernel SVM Classifiers

Mingqing Hu, Yiqiang Chen, J.T.-Y. Kwok
2009 IEEE Transactions on Neural Networks  
In this paper, we further extend this idea by integrating with techniques from multiple-kernel learning (MKL).  ...  The kernel function in this sparse SVM formulation no longer needs to be fixed but can be automatically learned as a linear combination of kernels.  ...  Hence, unlike the formulation considered in Section IV-A, here we can learn a sparse multiple-kernel classifier by simply alternating between LP and standard SVM training.  ... 
doi:10.1109/tnn.2009.2014229 pmid:19342346 fatcat:ptgk2gq3lrao5hmzc6dukzzkfi
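
A schematic of the alternation this abstract describes, between standard SVM training and an LP over kernel weights. Note that the bare LP step below jumps to a simplex vertex (maximally sparse), and practical implementations temper that step, so this is an assumption-laden sketch rather than the paper's procedure.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.svm import SVC

def sparse_mk_svm(kernels, y, n_iter=5):
    """Alternate (a) standard SVM training on the combined kernel with
    (b) an LP over kernel weights mu on the simplex. The LP optimum sits
    at a vertex, which is what makes the combination sparse; real
    implementations damp this step to avoid oscillation."""
    mu = np.ones(len(kernels)) / len(kernels)
    for _ in range(n_iter):
        K = sum(m * Km for m, Km in zip(mu, kernels))
        svm = SVC(kernel="precomputed").fit(K, y)
        a = np.zeros(len(y))
        a[svm.support_] = svm.dual_coef_.ravel()
        obj = -np.array([a @ Km @ a for Km in kernels])  # linear in mu
        res = linprog(obj, A_eq=np.ones((1, len(kernels))), b_eq=[1.0],
                      bounds=[(0, None)] * len(kernels))
        mu = res.x
    return mu, svm
```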

Novel Fusion Methods for Pattern Recognition [chapter]

Muhammad Awais, Fei Yan, Krystian Mikolajczyk, Josef Kittler
2011 Lecture Notes in Computer Science  
Over the last few years, several approaches have been proposed for information fusion, including different variants of classifier-level fusion (ensemble methods), stacking and multiple kernel learning (MKL).  ...  In this paper we propose a multiclass extension of binary ν-LPBoost, which learns the contribution of each class in each feature channel.  ...  This research was supported by UK EPSRC grants EP/F003420/1 and EP/F069421/1 and the BBC R&D grants.  ...
doi:10.1007/978-3-642-23780-5_19 fatcat:3r2yudj2p5hvpj6qv5xrvbeoha
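
Learning "the contribution of each class in each feature channel" amounts to a weight per (channel, class) pair applied at fusion time; here is a minimal sketch of applying such weights to base-classifier scores, with the weights given rather than learned by the paper's multiclass ν-LPBoost.

```python
import numpy as np

def fuse_scores(scores, W):
    """scores: (n_channels, n_classes) base-classifier scores for one
    sample; W: nonnegative weights (n_channels, n_classes), one per
    (channel, class) pair. Returns fused per-class scores."""
    return (W * scores).sum(axis=0)

scores = np.array([[0.2, 0.9, 0.1],    # channel 1 scores for 3 classes
                   [0.6, 0.3, 0.4]])   # channel 2
W = np.array([[0.8, 0.1, 0.5],
              [0.2, 0.9, 0.5]])
pred = int(np.argmax(fuse_scores(scores, W)))
```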

A Novel Multiple Kernel Learning Framework for Heterogeneous Feature Fusion and Variable Selection

Yi-Ren Yeh, Ting-Chu Lin, Yung-Yu Chung, Yu-Chiang Frank Wang
2012 IEEE transactions on multimedia  
We propose a novel multiple kernel learning (MKL) algorithm with a group lasso regularizer, called group lasso regularized MKL (GL-MKL), for heterogeneous feature fusion and variable selection.  ...  Adding a mixed ℓ_{1,2}-norm constraint (i.e., group lasso) as the regularizer, we can enforce sparsity at the group/feature level and automatically learn a compact feature set for recognition purposes.  ...  ACKNOWLEDGEMENTS This work is supported in part by the National Science Council of Taiwan via NSC 99-2221-E-001-020 and NSC 100-2221-E-001-018-MY2.  ...
doi:10.1109/tmm.2012.2188783 fatcat:l2iqmss3z5ex3dpo7ua27vofqy
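
The mixed ℓ_{1,2} norm (group lasso) named in this abstract is the sum of per-group ℓ2 norms; a sketch of the norm and its proximal operator, which zeroes out whole groups, with illustrative group boundaries.

```python
import numpy as np

def mixed_12_norm(w, groups):
    """Group lasso: sum over groups of the l2 norm of each block.
    Sparsity is induced at the group level, not per coordinate."""
    return sum(np.linalg.norm(w[g]) for g in groups)

def prox_group_lasso(w, groups, t):
    """Proximal step: shrink each group's norm by t, dropping whole
    groups whose norm falls below t (block soft-thresholding)."""
    out = w.copy()
    for g in groups:
        nrm = np.linalg.norm(w[g])
        out[g] = 0.0 if nrm <= t else (1 - t / nrm) * w[g]
    return out

w = np.array([0.9, -0.4, 0.05, 0.02, 1.2])
groups = [slice(0, 2), slice(2, 4), slice(4, 5)]
print(mixed_12_norm(w, groups), prox_group_lasso(w, groups, 0.1))
```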

Quantum Sparse Support Vector Machines [article]

Seyran Saeedi, Tom Arodz
2022 arXiv   pre-print
However, we prove that there are realistic scenarios in which a sparse linear classifier is expected to have high accuracy, and can be trained in sublinear time in terms of both the number of training  ...  We analyze the computational complexity of Quantum Sparse Support Vector Machine, a linear classifier that minimizes the hinge loss and the L_1 norm of the feature weights vector and relies on a quantum  ...  While the bound on the dual solution norm is a direct consequence of the sparsity of the model, the $\sum_i \xi_i$ term in the primal solution norm in principle grows in proportion to the number of training samples  ...
arXiv:1902.01879v4 fatcat:phqhimzcdjfghgc2o4zz5a7kci
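
The classical objective that Quantum Sparse SVM targets, hinge loss plus an L1 penalty on the weight vector, written out as a plain function for reference; this states the optimization problem only, not the quantum training procedure.

```python
import numpy as np

def sparse_svm_objective(w, b, X, y, C=1.0):
    """L1-regularized hinge-loss objective of a sparse linear SVM:
        ||w||_1 + C * sum_i max(0, 1 - y_i (w @ x_i + b)).
    The sum-of-slacks term (sum_i xi_i) mentioned in the abstract is
    exactly the hinge-loss term at the optimum."""
    xi = np.maximum(0.0, 1.0 - y * (X @ w + b))
    return np.abs(w).sum() + C * xi.sum()
```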
Showing results 1–15 of 440.