
Generalization Guarantees for a Binary Classification Framework for Two-Stage Multiple Kernel Learning [article]

Purushottam Kar
2013 arXiv   pre-print
We present generalization bounds for the TS-MKL framework for two-stage multiple kernel learning. We also present bounds for sparse kernel learning formulations within the TS-MKL framework.  ...  Introduction: Recently, Kumar et al. [6] proposed a framework for two-stage multiple kernel learning that combines the idea of target kernel alignment and the notion of a good kernel proposed in [1] to  ...  problem as the following optimization problem: min_{µ≥0} (λ/2)‖µ‖₂² + R(µ). Generalization Guarantees for a Learned Kernel Combination: Our generalization guarantee proceeds in two steps.  ... 
arXiv:1302.0406v1 fatcat:qnpcyv2go5f5rkvgw6jjyfxinq
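As an aside on the snippet above: the quoted weight-learning objective, min over µ ≥ 0 of (λ/2)‖µ‖₂² + R(µ), can be solved by projected (sub)gradient descent. A minimal sketch, assuming a hypothetical L1 choice for R and a caller-supplied data-fit gradient (neither is from the paper):

```python
import numpy as np

def learn_kernel_weights(grad_loss, d, lam=1.0, l1=0.1, lr=0.01, steps=500):
    """Projected subgradient descent for min_{mu >= 0} (lam/2)*||mu||_2^2 + R(mu),
    where R is illustrated here as an L1 penalty plus a data-fit term whose
    gradient is supplied by the caller via grad_loss."""
    mu = np.ones(d) / d
    for _ in range(steps):
        g = grad_loss(mu) + lam * mu + l1 * np.sign(mu)
        mu = np.maximum(mu - lr * g, 0.0)  # project onto the nonnegative orthant
    return mu

# Toy data-fit gradient that pulls the weights toward a target vector.
target = np.array([0.7, 0.2, 0.1])
mu = learn_kernel_weights(lambda m: m - target, d=3)
```

The nonnegativity projection is what keeps the learned kernel combination positive semidefinite when the base kernels are; the L1 term encourages the sparse combinations the abstract mentions.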

A Binary Classification Framework for Two-Stage Multiple Kernel Learning [article]

Abhishek Kumar, Alexandru Niculescu-Mizil, Hal Daume III
2012 arXiv   pre-print
In this paper we show that Multiple Kernel Learning can be framed as a standard binary classification problem with additional constraints that ensure the positive definiteness of the learned kernel.  ...  In this context, the Multiple Kernel Learning (MKL) problem of finding a combination of pre-specified base kernels that is suitable for the task at hand has received significant attention from researchers  ...  In this paper we introduce TS-MKL, a general approach to Two-Stage Multiple Kernel Learning that encompasses the previous work based on target alignment as special cases.  ... 
arXiv:1206.6428v1 fatcat:pk2lgqnjereqzcmf5gr43dm6ve

Multi-Task Multiple Kernel Relationship Learning [article]

Keerthiram Murugesan, Jaime Carbonell
2017 arXiv   pre-print
This paper presents a novel multitask multiple kernel learning framework that efficiently learns the kernel weights leveraging the relationship across multiple tasks.  ...  In order to tackle large-scale problems, we further propose a two-stage MK-MTRL online learning algorithm and show that it significantly reduces the computational time, and also achieves performance comparable  ...  We propose an efficient binary classification framework for learning the weights of these task-specific base kernels, based on target alignment [6] .  ... 
arXiv:1611.03427v2 fatcat:gpiorygjfjdd7dor7xf4zyamsy
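Several of these abstracts cite target alignment [6]; the criterion is usually the normalized Frobenius inner product between a base kernel and the ideal kernel yy^T. A minimal sketch (function name is mine, not from the papers):

```python
import numpy as np

def kernel_alignment(K, y):
    """Alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

y = np.array([1.0, 1.0, -1.0, -1.0])
K_good = np.outer(y, y)   # kernel perfectly aligned with the labels
K_bad = np.eye(4)         # uninformative (identity) kernel
```

A perfectly aligned kernel scores 1, so two-stage methods can rank or weight base kernels by this score before training the final predictor.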

Domain-Adversarial Multi-Task Framework for Novel Therapeutic Property Prediction of Compounds [article]

Lingwei Xie, Song He, Shu Yang, Boyuan Feng, Kun Wan, Zhongnan Zhang, Xiaochen Bo, Yufei Ding
2018 arXiv   pre-print
In this paper, we propose a novel domain-adversarial multi-task framework for integrating shared knowledge from multiple domains.  ...  The framework utilizes the adversarial strategy to effectively learn target representations and models their nonlinear dependency.  ...  Figure 2: The training of the whole framework consists of two stages.  ... 
arXiv:1810.00867v1 fatcat:ysjmsdkndfe7rapwbznwngvxgm

RBCN: Rectified Binary Convolutional Networks for Enhancing the Performance of 1-bit DCNNs [article]

Chunlei Liu and Wenrui Ding and Xin Xia and Yuan Hu and Baochang Zhang and Jianzhuang Liu and Bohan Zhuang and Guodong Guo
2019 arXiv   pre-print
In this paper, we propose rectified binary convolutional networks (RBCNs), towards optimized BCNNs, by combining full-precision kernels and feature maps to rectify the binarization process in a unified framework.  ...  [..., 2019] learns a set of diverse quantized kernels by exploiting multiple projections with discrete back propagation.  ... 
arXiv:1908.07748v2 fatcat:g2qkaqoeibhu7pw6q46sbs7eje

Multi-Task Multiple Kernel Relationship Learning [chapter]

Keerthiram Murugesan, Jaime Carbonell
2017 Proceedings of the 2017 SIAM International Conference on Data Mining  
This paper presents a novel multitask multiple kernel learning framework that efficiently learns the kernel weights leveraging the relationship across multiple tasks.  ...  In order to tackle large-scale problems, we further propose a two-stage MK-MTRL online learning algorithm and show that it significantly reduces the computational time, and also achieves performance comparable  ...  Acknowledgements We thank the anonymous reviewers for their helpful comments.  ... 
doi:10.1137/1.9781611974973.77 dblp:conf/sdm/MurugesanC17 fatcat:ctdfk5sifvdkbfj3wnnhkme3di

Reduction from Cost-Sensitive Ordinal Ranking to Weighted Binary Classification

Hsuan-Tien Lin, Ling Li
2012 Neural Computation  
bounds for binary classification.  ...  We present a reduction framework from ordinal ranking to binary classification.  ...  Abu-Mostafa, Amrit Pratap, John Langford and the anonymous reviewers for valuable discussions and comments.  ... 
doi:10.1162/neco_a_00265 pmid:22295981 fatcat:xdkdxpkhujh55cpnfgmi6rmdvm

PEBL: Web Page Classification without Negative Examples

Hwanjo Yu, Jiawei Han, K.C. Chang
2004 IEEE Transactions on Knowledge and Data Engineering  
This paper presents a framework, called Positive Example Based Learning (PEBL), for Web page classification which eliminates the need for manually collecting negative training examples in preprocessing  ...  M-C runs in two stages: the mapping stage and convergence stage. In the mapping stage, the algorithm uses a weak classifier that draws an initial approximation of "strong" negative data.  ...  Fig. 1 shows the difference between a typical learning framework and the PEBL framework for Web page classification.  ... 
doi:10.1109/tkde.2004.1264823 fatcat:qple4ngrjzavhagxe7hb6u3mhq
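The two-stage Mapping-Convergence (M-C) idea described in the PEBL snippet can be caricatured in a few lines. This toy version substitutes a nearest-centroid rule for PEBL's actual weak classifier and SVM, so it is an illustration of the two-stage structure only, not the paper's implementation:

```python
import numpy as np

def mapping_convergence(P, U, rounds=5):
    """Toy Mapping-Convergence for positive/unlabeled (PU) learning.
    Mapping stage: unlabeled points far from the positive centroid become
    the initial "strong" negatives. Convergence stage: iteratively retrain
    a centroid classifier and grow the negative set with unlabeled points
    it labels negative, until the set stops changing."""
    pos_c = P.mean(axis=0)
    d = np.linalg.norm(U - pos_c, axis=1)
    neg_mask = d > np.median(d)              # mapping stage
    for _ in range(rounds):                  # convergence stage
        neg_c = U[neg_mask].mean(axis=0)
        closer_to_neg = (np.linalg.norm(U - neg_c, axis=1)
                         < np.linalg.norm(U - pos_c, axis=1))
        new_mask = neg_mask | closer_to_neg
        if np.array_equal(new_mask, neg_mask):
            break
        neg_mask = new_mask
    return neg_mask

# Positives cluster near the origin; unlabeled data mixes both clusters.
rng = np.random.default_rng(0)
P = rng.normal(0.0, 0.5, (20, 2))
U = np.vstack([rng.normal(0.0, 0.5, (20, 2)),
               rng.normal(5.0, 0.5, (20, 2))])
mask = mapping_convergence(P, U)  # True marks inferred negatives
```

The point of the two stages is that no manually collected negatives are needed: the mapping stage bootstraps them from the unlabeled pool.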

Scalable Discrete Supervised Hash Learning with Asymmetric Matrix Factorization [article]

Shifeng Zhang, Jianmin Li, Jinma Guo, Bo Zhang
2016 arXiv   pre-print
First, the discrete learning procedure is decomposed into a binary-classifier learning scheme and a binary-code learning scheme, which makes the learning procedure more efficient.  ...  The proposed framework also provides a flexible paradigm to incorporate arbitrary hash functions, including deep neural networks and kernel methods.  ...  For example, kernel SVM corresponds to a kernel-based hash function. A binary classifier with high classification accuracy corresponds to a good hash function.  ... 
arXiv:1609.08740v1 fatcat:dpbs4eayefarxpqwjxkaxiiaf4

Coupled Multiple Kernel Learning for Supervised Classification

En Zhu, Qiang Liu, Jianping Yin
2017 Computing and informatics  
In this paper, we propose a coupled multiple kernel learning method for supervised classification (CMKL-C), which comprehensively involves the intra-coupling within each kernel, inter-coupling among different  ...  Multiple kernel learning (MKL) has recently received significant attention due to the fact that it is able to automatically fuse information embedded in multiple base kernels and then find a new kernel  ...  learning was to learn complex couplings and heterogeneity, which laid a solid theoretical framework for the non-IIDness learning study.  ... 
doi:10.4149/cai_2017_3_618 fatcat:4haxlkzm6jhrhccwj6jtmftl34

Transfer Deep Learning Along with Binary Support Vector Machine for Abnormal Behavior Detection

Ahlam Al-Dhamari, Rubita Sudirman, Nasrul Humaimi Mahmood
2020 IEEE Access  
Next, the feature vector is passed into Binary Support Vector Machine classifier (BSVM) to construct a binary-SVM model.  ...  Today, machine learning and deep learning have paved the way for vital and critical applications such as abnormal detection.  ...  To perform the binary classification, BSVM with linear kernel was utilized.  ... 
doi:10.1109/access.2020.2982906 fatcat:gz7rlf37fjcmjh5nbo6obfmhx4

Stable Learning in Coding Space for Multi-class Decoding and Its Extension for Multi-class Hypothesis Transfer Learning

Bang Zhang, Yi Wang, Yang Wang, Fang Chen
2014 2014 IEEE Conference on Computer Vision and Pattern Recognition  
Many prevalent multi-class classification approaches can be unified and generalized by the output coding framework [1, 7], which usually consists of three phases: (1) coding, (2) learning binary classifiers  ...  Most of these approaches focus on the first two phases, and a predefined distance function is used for decoding.  ...  Source domain hypotheses are exploited for leveraging target domain training via an alternating optimization process between coding space learning and target domain training.  ... 
doi:10.1109/cvpr.2014.141 dblp:conf/cvpr/ZhangWWC14 fatcat:3etiyv3oanenppmizai43akmgq

New Deep Kernel Learning based Models for Image Classification

Rabha O., Yasser F. Hassan, Mohamed W. Saleh
2017 International Journal of Advanced Computer Science and Applications  
In this paper, three different models with different deep multiple kernel learning architectures are proposed and evaluated for the breast cancer classification problem.  ...  For image classification, support vector machines are used with the proposed deep multiple kernel models.  ...  A general framework for MLMKL is proposed in [10]. The authors encountered optimization difficulties in networks beyond two layers.  ... 
doi:10.14569/ijacsa.2017.080755 fatcat:lmtqjn4hzfcslaqjh73qefkgia

Multi Kernel Learning with Online-Batch Optimization

Francesco Orabona, Jie Luo, Barbara Caputo
2012 Journal of machine learning research  
In the domain of kernel methods, a principled way to use multiple features is the Multi Kernel Learning (MKL) approach.  ...  In recent years there has been a lot of interest in designing principled classification algorithms over multiple cues, based on the intuitive notion that using more features should lead to better performance  ...  Acknowledgments: The kernel matrices of Caltech-101 were kindly provided by Peter Gehler, whom we also thank for his useful comments. This work was sponsored by the EU project DIRAC IST-027787.  ... 
dblp:journals/jmlr/OrabonaLC12 fatcat:5ub67eme4jewxjxrluhigsaa6y

Memory and Computation-Efficient Kernel SVM via Binary Embedding and Ternary Model Coefficients

Zijian Lei, Liang Lan
2021 Proceedings of the AAAI Conference on Artificial Intelligence
Second, we propose a simple but effective algorithm to learn a linear classification model with binary coefficients, which can support different types of loss functions and regularizers.  ...  Our algorithm can achieve better generalization accuracy than existing works on learning binary coefficients, since we allow coefficients to be -1, 0 or 1 during the training stage and coefficient 0 can  ...  Acknowledgments: We would like to thank the anonymous reviewers for their insightful comments and valuable suggestions on our paper. This work was supported by NSFC 61906161.  ... 
doi:10.1609/aaai.v35i9.17011 fatcat:q5stxi6bifdknnp5pzul2wshqu
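The ternary-coefficient idea in this entry can be illustrated with a simple deterministic projection onto {-1, 0, +1}. The threshold below is an arbitrary illustrative choice, not the paper's training procedure, which learns the coefficients directly:

```python
import numpy as np

def ternarize(w, thresh=0.33):
    """Project real-valued weights onto {-1, 0, +1}.
    Weights with magnitude below thresh are pruned to 0, giving the
    memory savings the abstract attributes to the 0 coefficient."""
    t = np.sign(w)
    t[np.abs(w) < thresh] = 0.0
    return t

w = np.array([0.9, -0.05, -0.7, 0.1])
t = ternarize(w)
```

Such a model needs no floating-point multiplies at prediction time: scoring reduces to adding and subtracting (binary-embedded) feature values wherever the coefficient is +1 or -1.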