1,308 Hits in 5.2 sec

A kernelized maximal-figure-of-merit learning approach based on subspace distance minimization

Byungki Byun, Chin-Hui Lee
2011 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
We propose a kernelized maximal-figure-of-merit (MFoM) learning approach to efficiently train a nonlinear model using subspace distance minimization.  ...  We show that the subspace distance can be minimized through the Nyström extension.  ...  To tackle this issue, we therefore propose a kernelized MFoM learning approach based on a subspace distance minimization criterion.  ... 
doi:10.1109/icassp.2011.5946732 dblp:conf/icassp/ByunL11 fatcat:4tt2iyud6fcyvkbwiwt37w33re
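The Nyström extension referenced in this abstract approximates a large kernel matrix from a small set of landmark columns. A minimal sketch, assuming an RBF kernel and the first data points as landmarks (the authors' actual kernel and landmark-selection scheme are not specified here):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian RBF kernel between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
landmarks = X[:20]                       # m << n landmark points (assumption)

K_nm = rbf_kernel(X, landmarks)          # n x m cross-kernel
K_mm = rbf_kernel(landmarks, landmarks)  # m x m landmark kernel
# Nystroem approximation of the full n x n kernel matrix
K_approx = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

K_full = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
```

The approximation costs O(nm^2) instead of the O(n^2) needed to form the full kernel, which is what makes the subspace-distance computation tractable for large n.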

Anti-drift in electronic nose via dimensionality reduction: a discriminative subspace projection approach [article]

Zhengkun Yi, Cheng Li
2018 arXiv   pre-print
Experiments on two sensor drift datasets have shown the effectiveness of the proposed approach.  ...  The proposed method inherits the merits of the subspace projection method called domain regularized component analysis.  ...  Zhang for his clear and helpful explanation on the DRCA method.  ... 
arXiv:1901.02321v1 fatcat:nxwuyf7f45ew3n324ftfrcyjji

Anti-Drift in Electronic Nose via Dimensionality Reduction: A Discriminative Subspace Projection Approach

Zhengkun Yi, Cheng Li
2019 IEEE Access  
The proposed method has multiple properties. (1) It inherits the merits of the subspace projection approach called domain regularized component analysis via introducing a regularization parameter to tackle  ...  Experiments on two sensor drift datasets have shown the effectiveness of the proposed approach.  ...  Zhang for his clear and helpful explanation on the DRCA method.  ... 
doi:10.1109/access.2019.2955712 fatcat:zslr7pocyrdppexhlyfaeyhv44

Discriminant analysis on Riemannian manifold of Gaussian distributions for face recognition with image sets

Wen Wang, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen
2015 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
To encode such Riemannian geometry properly, we investigate several distances between Gaussians and further derive a series of provably positive definite probabilistic kernels.  ...  In the light of information geometry, the Gaussians lie on a specific Riemannian manifold.  ...  Figure 1: Conceptual illustration of the proposed approach. (a) Training image sets in the gallery.  ... 
doi:10.1109/cvpr.2015.7298816 dblp:conf/cvpr/WangWHSC15 fatcat:xw7anblsjra6hnw6vi4ks6vwje

Discriminant Analysis on Riemannian Manifold of Gaussian Distributions for Face Recognition with Image Sets

Wen Wang, Ruiping Wang, Zhiwu Huang, Shiguang Shan, Xilin Chen
2017 IEEE Transactions on Image Processing  
To encode such Riemannian geometry properly, we investigate several distances between Gaussians and further derive a series of provably positive definite probabilistic kernels.  ...  In the light of information geometry, the Gaussians lie on a specific Riemannian manifold.  ...  Figure 1: Conceptual illustration of the proposed approach. (a) Training image sets in the gallery.  ... 
doi:10.1109/tip.2017.2746993 pmid:28866497 fatcat:h4nu5sufprg4xlii2kga3xdj4u
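Among the distances between Gaussians that such work investigates, the log-Euclidean distance between covariance matrices is a common choice. A minimal sketch for zero-mean Gaussians (the paper studies several distances; this illustrates only one of them):

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(S1, S2):
    """Log-Euclidean distance between two SPD (covariance) matrices."""
    return np.linalg.norm(logm_spd(S1) - logm_spd(S2), 'fro')

rng = np.random.default_rng(1)
# two toy covariance matrices, made safely positive definite
A = rng.normal(size=(4, 4)); S1 = A @ A.T + 4 * np.eye(4)
B = rng.normal(size=(4, 4)); S2 = B @ B.T + 4 * np.eye(4)
d12 = log_euclidean_dist(S1, S2)
```

Because the distance is a Euclidean norm in the log-domain, plugging it into a Gaussian kernel yields a positive definite kernel on the manifold, which is the property the abstract emphasizes.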

A Survey on Multi-view Learning [article]

Chang Xu, Dacheng Tao, Chao Xu
2013 arXiv   pre-print
Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming  ...  Subspace Learning-based Approaches Subspace learning-based approaches aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this subspace.  ... 
arXiv:1304.5634v1 fatcat:nnux76pyobdzhovzlcywxrzkty
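The linear kernel combination mentioned for multiple kernel learning can be sketched as a convex sum of per-view kernel matrices. In this illustration the weights are fixed by hand rather than learned, which is the assumption to note:

```python
import numpy as np

def linear_kernel(X):
    """Plain inner-product kernel on the rows of X."""
    return X @ X.T

def rbf_kernel(X, gamma=0.1):
    """Gaussian RBF kernel on the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
view1 = rng.normal(size=(50, 8))   # e.g. visual features (toy data)
view2 = rng.normal(size=(50, 3))   # e.g. textual features (toy data)

weights = [0.6, 0.4]               # convex combination: non-negative, sums to 1
K = weights[0] * linear_kernel(view1) + weights[1] * rbf_kernel(view2)
```

A non-negative combination of positive semi-definite kernels is itself positive semi-definite, so K can be fed directly to any kernel method; in actual MKL the weights are optimized jointly with the downstream learner.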

Motor Imagery Classification via Kernel-Based Domain Adaptation on an SPD Manifold

Qin Jiang, Yi Zhang, Kai Zheng
2022 Brain Sciences  
In KMDA, the covariance matrices are aligned in the Riemannian manifold, and then are mapped to a high dimensional space by a log-Euclidean metric Gaussian kernel, where subspace learning is performed  ...  Methods: In this paper, we propose a novel domain adaptation framework, referred to as kernel-based Riemannian manifold domain adaptation (KMDA).  ...  Conflicts of Interest: The authors declare no conflict of interest regarding the publication of this article.  ... 
doi:10.3390/brainsci12050659 fatcat:uwrzer2fuzdjtebiijjsam4jbu
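The log-Euclidean metric Gaussian kernel described in this abstract can be sketched as a Gaussian kernel applied to the log-Euclidean distance between covariance matrices (a generic formulation, not necessarily the exact KMDA parameterization):

```python
import numpy as np

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def lem_gaussian_kernel(S1, S2, sigma=1.0):
    """Gaussian kernel under the log-Euclidean metric on SPD matrices."""
    d = np.linalg.norm(logm_spd(S1) - logm_spd(S2), 'fro')
    return np.exp(-d ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(3)
# toy stand-ins for covariance matrices of two EEG trials
A = rng.normal(size=(6, 6)); C1 = A @ A.T + 6 * np.eye(6)
B = rng.normal(size=(6, 6)); C2 = B @ B.T + 6 * np.eye(6)
k12 = lem_gaussian_kernel(C1, C2)
```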

Manifold elastic net for sparse learning

Tianyi Zhou, Dacheng Tao
2009 2009 IEEE International Conference on Systems, Man and Cybernetics  
MEN combines the merits of manifold regularization and elastic net regularization, so it considers both the nonlinear manifold structure of a dataset and the sparse property of the redundant data representation.  ...  Most existing works apply appearance-based information for data representation. A face image of size 40 by 40 can be seen as a point in a linear space with 1600 dimensions.  ...  Figure captions: recognition rate vs. dimension on FERET and UMIST; Figure 3: recognition rate vs. number of training samples on FERET.  ... 
doi:10.1109/icsmc.2009.5346879 dblp:conf/smc/ZhouT09 fatcat:x6a2h363z5fcpe7pvgjtelo7zi
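The elastic net regularization that MEN builds on combines an L1 and an L2 penalty on the weights. A small proximal-gradient (ISTA) sketch of plain elastic net regression, not MEN itself (the manifold regularization term is omitted):

```python
import numpy as np

def soft(z, thr):
    """Soft-thresholding operator: the proximal map of the L1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def elastic_net_ista(X, y, alpha=0.1, l1_ratio=0.5, step=0.2, iters=500):
    """Minimize 1/(2n)||y - Xw||^2 + alpha*(l1_ratio*||w||_1
    + (1 - l1_ratio)/2 * ||w||_2^2) by proximal gradient descent."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        # gradient of the smooth part (squared loss + ridge term)
        grad = X.T @ (X @ w - y) / n + alpha * (1 - l1_ratio) * w
        # proximal step handles the non-smooth L1 term
        w = soft(w - step * grad, step * alpha * l1_ratio)
    return w

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
true_w = np.zeros(10); true_w[:3] = [2.0, -1.5, 1.0]   # sparse ground truth
y = X @ true_w + 0.1 * rng.normal(size=100)
w_hat = elastic_net_ista(X, y)
```

The L1 term drives irrelevant coefficients exactly to zero while the L2 term keeps the solution stable under correlated features, which is the "sparse property" the abstract refers to.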

Scalable Outlying-Inlying Aspects Discovery via Feature Ranking [chapter]

Nguyen Xuan Vinh, Jeffrey Chan, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao, Jian Pei
2015 Lecture Notes in Computer Science  
Second, we present OARank, a hybrid framework that leverages the efficiency of feature-selection-based approaches and the effectiveness and versatility of score-and-search-based methods.  ...  Our proposed approach is orders of magnitude faster than previously proposed score-and-search-based approaches while being slightly more effective, making it suitable for mining large datasets.  ...  The average Jaccard index and precision over all outliers for different approaches on all datasets are reported in Figure 3(a,b).  ... 
doi:10.1007/978-3-319-18032-8_33 fatcat:kwktelktdzh2zgmxdxnfqfj3de

Face Subspace Learning [chapter]

Wei Bian, Dacheng Tao
2011 Handbook of Face Recognition  
The last few decades have witnessed great success of subspace learning for face recognition.  ...  Mathematically, PCA maximizes the variance in the projected subspace for a given dimensionality, decorrelates the training face images in the projected subspace, and maximizes the mutual information between  ...  Li for insightful discussions on nearest feature line.  ... 
doi:10.1007/978-0-85729-932-1_3 fatcat:ot7fkakworamtavm4jwlp4sfjm
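The PCA properties listed in this abstract (variance maximization and decorrelation) follow from projecting onto the leading eigenvectors of the sample covariance. A minimal sketch on toy data (random vectors stand in for vectorized face images):

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)               # center the data
    C = Xc.T @ Xc / (len(X) - 1)          # sample covariance matrix
    w, V = np.linalg.eigh(C)              # eigendecomposition (ascending order)
    top = V[:, np.argsort(w)[::-1][:k]]   # k leading eigenvectors
    return Xc @ top, top

rng = np.random.default_rng(5)
faces = rng.normal(size=(30, 64))   # 30 toy 'face' vectors of 64 pixels each
Z, components = pca(faces, k=5)
```

The first projected coordinate carries the largest variance, the second the next largest, and the coordinates are uncorrelated, matching the properties the chapter enumerates.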

Kernel-based distance metric learning in the output space

Cong Li, Michael Georgiopoulos, Georgios C. Anagnostopoulos
2013 The 2013 International Joint Conference on Neural Networks (IJCNN)  
In this paper we present two related, kernel-based Distance Metric Learning (DML) methods.  ...  Experimental results for a collection of classification tasks illustrate the advantages of the proposed methods over other traditional and kernel-based DML approaches.  ...  Eventually, one can simultaneously learn the output space distance metric and the mapping f through a joint minimization.  ... 
doi:10.1109/ijcnn.2013.6706862 dblp:conf/ijcnn/LiGA13 fatcat:jz7kksq35vdpzbfvor4ch2dcnu
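Distance metric learning of this kind typically parameterizes a Mahalanobis-type distance by a positive semi-definite matrix M = LᵀL. A sketch with a random stand-in for the learned transform L (the paper's actual learning objective is not reproduced here):

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance d_M(x, y)^2 = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

rng = np.random.default_rng(6)
L = rng.normal(size=(3, 5))      # stand-in for a learned linear map (assumption)
M = L.T @ L                      # M = L^T L is positive semi-definite by construction

x, y = rng.normal(size=5), rng.normal(size=5)
d2 = mahalanobis_sq(x, y, M)
# equivalently: the squared Euclidean distance after mapping through L
d2_alt = float(np.sum((L @ x - L @ y) ** 2))
```

The equivalence d_M(x, y)² = ||Lx − Ly||² is what lets such methods be kernelized: replace the linear map L with a map in a reproducing kernel Hilbert space.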

Margin Based Semi-Supervised Elastic Embedding for Face Image Analysis

F. Dornaika, Y. El Traboulsi
2017 2017 IEEE International Conference on Computer Vision Workshops (ICCVW)  
Unlike many state-of-the-art non-linear embedding approaches, which suffer from the out-of-sample problem, our proposed methods have a direct out-of-sample extension to novel samples.  ...  are based on label propagation or graph-based semi-supervised embedding.  ...  In [10], the authors propose a joint learning of labels and a distance metric, which is able to optimize the labels of unlabeled samples and a Mahalanobis distance metric in a unified scheme.  ... 
doi:10.1109/iccvw.2017.156 dblp:conf/iccvw/DornaikaT17 fatcat:pbmr2pz5erd27f5skd2xhw2c3e

Localizing volumetric motion for action recognition in realistic videos

Xiao Wu, Chong-Wah Ngo, Jintao Li, Yongdong Zhang
2009 Proceedings of the seventeen ACM international conference on Multimedia - MM '09  
Experiments on a realistic Hollywood movie dataset show that the proposed approach achieves a 20% relative improvement over the state-of-the-art STIP-based algorithm.  ...  Previous works mainly focus on learning from descriptors of cuboids around space-time interest points (STIP) to characterize actions.  ...  Figure 3: VOI extraction and description: (a) low-dimensional embedding of trajectories; (b) determination of the spatiotemporal boundary of a 3D VOI using the minimal and maximal coordinates of trajectories.  ... 
doi:10.1145/1631272.1631342 dblp:conf/mm/WuNLZ09 fatcat:fxnhowv4urbwhc3vlqkhuuhnwq

Supervised Kernel Optimized Locality Preserving Projection with Its Application to Face Recognition and Palm Biometrics

Chuang Lin, Jifeng Jiang, Xuefeng Zhao, Meng Pang, Yanchun Ma
2015 Mathematical Problems in Engineering  
In order to overcome this limitation, a method named supervised kernel optimized LPP (SKOLPP) is proposed in this paper, which can maximize the class separability in kernel learning.  ...  However, the conventional SKLPP algorithm suffers from the kernel selection problem, which has a significant impact on the performance of SKLPP.  ...  In SKOLPP, we first construct a data-dependent kernel [18] to maximize the class separability based on the Fisher criterion.  ... 
doi:10.1155/2015/421671 fatcat:htlmlqak6rbhjfhw4etaku75rm

FISH-MML: Fisher-HSIC Multi-View Metric Learning

Changqing Zhang, Yeqing Liu, Yue Liu, Qinghua Hu, Xinwang Liu, Pengfei Zhu
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
In our approach, the class separability is enforced in the spirit of FDA within each single view, while the consistency among different views is enhanced based on HSIC.  ...  learning method based on Fisher discriminant analysis (FDA) and the Hilbert-Schmidt Independence Criterion (HSIC), termed Fisher-HSIC Multi-View Metric Learning (FISH-MML).  ...  Acknowledgments This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61602337, 61732011, 61702358).  ... 
doi:10.24963/ijcai.2018/424 dblp:conf/ijcai/ZhangLLHLZ18 fatcat:s7byv55zuncytaud2uwz33fsfa
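The empirical HSIC used here to enforce cross-view consistency can be computed from centered kernel matrices as HSIC(K, L) = tr(KHLH)/(n−1)². A minimal sketch with linear kernels (the kernel choice is an assumption, not taken from the paper):

```python
import numpy as np

def hsic(K, L):
    """Empirical Hilbert-Schmidt Independence Criterion between kernel matrices K and L."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 3))        # view 1
Y = X @ rng.normal(size=(3, 2))     # view 2: a deterministic function of view 1
Z = rng.normal(size=(40, 2))        # an independent third view

K = X @ X.T                         # linear kernel on view 1
hsic_dep = hsic(K, Y @ Y.T)         # dependent pair: large HSIC
hsic_ind = hsic(K, Z @ Z.T)         # independent pair: near-zero HSIC
```

A large HSIC value indicates statistical dependence between the two views, so maximizing it across views pushes the learned metrics toward mutual consistency.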