43,732 Hits in 4.6 sec

Erratum to: A Method for Metric Learning with Multiple-Kernel Embedding

Xiao Lu, Yaonan Wang, Xuanyu Zhou, Zhigang Ling
2015 Neural Processing Letters  
Fig. 1 a The formulation in [23]: a data point x ∈ X is mapped into M feature spaces via φ_1, φ_2, ..., φ_M, which are then scaled by μ_1, μ_2, ..., μ_M to form a weighted feature space H*. b Concatenated projection. c Our proposed formulation is the weighted version of b: the projections and the weights are jointly learned to produce the embedding space.  ... 
doi:10.1007/s11063-015-9468-8 fatcat:ea3hxp2g5zdc5o7sjts6w2c6ta

Face recognition on large-scale video in the wild with hybrid Euclidean-and-Riemannian metric learning

Zhiwu Huang, Ruiping Wang, Shiguang Shan, Xilin Chen
2015 Pattern Recognition  
In this paper, we propose a novel Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to fuse multiple statistics of image sets.  ...  With a LogDet divergence based objective function, the hybrid kernels are then fused by our hybrid metric learning framework, which can efficiently perform the fusing procedure on large-scale videos.  ...  ξ_ij; (i, j) ∈ D. Multiple kernel learning: Multiple Kernel Learning (MKL) refers to the process of learning a kernel machine with multiple kernel functions or kernel matrices.  ... 
doi:10.1016/j.patcog.2015.03.011 fatcat:sqr2cvioajdczbfmno3mnfonha
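The MKL setting described in the snippet above — a kernel machine built from several kernel functions — is commonly realized as a convex combination of base kernels. A minimal sketch, assuming RBF base kernels with illustrative bandwidths (the specific kernels and weights here are hypothetical, not those of the HERML paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combined_kernel(X, Y, gammas, weights):
    """Convex combination of base kernels, the standard MKL parameterization."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # project weights onto the simplex
    return sum(w * rbf_kernel(X, Y, g) for w, g in zip(weights, gammas))

X = np.random.RandomState(0).randn(5, 3)
K = combined_kernel(X, X, gammas=[0.1, 1.0, 10.0], weights=[1, 2, 1])
```

Because each base kernel is positive semidefinite and the weights are non-negative, the combined Gram matrix remains a valid kernel; learning the weights jointly with the downstream objective is what distinguishes MKL from a fixed kernel average.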

Hybrid Euclidean-and-Riemannian Metric Learning for Image Set Classification [chapter]

Zhiwu Huang, Ruiping Wang, Shiguang Shan, Xilin Chen
2015 Lecture Notes in Computer Science  
We propose a novel hybrid metric learning approach to combine multiple heterogeneous statistics for robust image set classification.  ...  To fuse these statistics from heterogeneous spaces, we propose a Hybrid Euclidean-and-Riemannian Metric Learning (HERML) method to exploit both Euclidean and Riemannian metrics for embedding their original  ...  Riemannian metrics for embedding these spaces into high-dimensional Hilbert spaces, and jointly learn the corresponding metrics of multiple statistics for a discriminative objective.  ... 
doi:10.1007/978-3-319-16811-1_37 fatcat:ihjhwmxo5rbglfwknfvbovj2jq

Deep Multiple Metric Learning for Time Series Classification

Zhi Chen, Yongguo Liu, Jiajing Zhu, Yun Zhang, Qiaoqin Li, Rongjiang Jin, Xia He
2021 IEEE Access  
In this paper, we propose a novel deep multiple metric learning (DMML) method for time series classification.  ...  However, most existing approaches focus on learning a single linear metric, which is unsuitable for nonlinear relationships and heterogeneous datasets with locality information.  ...  [40] formulated metric learning as a kernel regression problem. However, choosing a proper kernel with flexible expressive power is difficult and largely empirical.  ... 
doi:10.1109/access.2021.3053703 fatcat:erhjde4xmfbdjokuqmhrkmo5by

Stochastic Attraction-Repulsion Embedding for Large Scale Image Localization [article]

Liu Liu and Hongdong Li and Yuchao Dai
2019 arXiv   pre-print
For solving this problem, a critical task is to learn a discriminative image representation that captures information relevant for localization.  ...  We propose a novel representation learning method with higher location-discriminating power.  ...  Comparison with Metric-learning Methods: Although deep metric-learning methods have shown their effectiveness in classification and fine-grained recognition tasks, their abilities in  ... 
arXiv:1808.08779v2 fatcat:orujatpac5f6vksk5r5z2vgutu

Compositional Prototype Network with Multi-view Comparision for Few-Shot Point Cloud Semantic Segmentation [article]

Xiaoyu Chen, Chi Zhang, Guosheng Lin, Jing Han
2020 arXiv   pre-print
Extensive experiments show that our method outperforms the baselines by a significant margin.  ...  Inspired by the few-shot learning literature in images, our network directly transfers label information from the limited training data to unlabeled test data for prediction.  ...  Deep Metric Learning: Metric learning methods attempt to map data to an embedding space, where data with similar characteristics are close and dissimilar data are far apart.  ... 
arXiv:2012.14255v1 fatcat:2q4ncjxdibfjlb3wr53bkaa7iq
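The deep metric learning objective sketched in the snippet above — similar points close, dissimilar points apart — is typically enforced with a margin-based loss over triplets. A minimal numpy sketch of the standard triplet hinge loss (illustrative only; the paper's actual loss may differ):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss encouraging d(anchor, positive) + margin <= d(anchor, negative)."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-class sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to different-class sample
    return max(0.0, d_pos - d_neg + margin)
```

During training this loss is computed on embeddings produced by the network, so gradients pull positives toward the anchor and push negatives beyond the margin.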

Partial order embedding with multiple kernels

Brian McFee, Gert Lanckriet
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
We present an embedding algorithm based on semidefinite programming, which can be parameterized by multiple kernels to yield a unified space from heterogeneous features.  ...  We consider the problem of embedding arbitrary objects (e.g., images, audio, documents) into Euclidean space subject to a partial order over pairwise distances.  ...  Multiple kernels For heterogeneous data, we learn a unified space from multiple kernels, each of which may code for different features.  ... 
doi:10.1145/1553374.1553467 dblp:conf/icml/McFeeL09 fatcat:6kap46fltvc6viktjwppyn2pcu
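The partial order over pairwise distances described in the McFee–Lanckriet snippet above can be expressed as quadruple constraints: (i, j, k, l) meaning objects i and j should be closer than objects k and l. A small sketch of checking such constraints against a candidate embedding (the constraint encoding here is an assumption for illustration, not the paper's SDP formulation):

```python
import numpy as np

def satisfies_partial_order(embedding, constraints, margin=0.0):
    """Check quadruple constraints (i, j, k, l), each requiring
    d(i, j) + margin <= d(k, l) in the embedded space."""
    d = lambda a, b: np.linalg.norm(embedding[a] - embedding[b])
    return all(d(i, j) + margin <= d(k, l) for i, j, k, l in constraints)

emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 0.0], [3.0, 0.0]])
ok = satisfies_partial_order(emb, [(0, 1, 2, 3)])  # d(0,1)=1 vs d(2,3)=3
```

In the paper these inequalities become linear constraints on a Gram matrix, which is why the embedding can be found by semidefinite programming.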

Deep Metric Learning and Image Classification with Nearest Neighbour Gaussian Kernels [article]

Benjamin J. Meyer, Ben Harwood, Tom Drummond
2018 arXiv   pre-print
We present a Gaussian kernel loss function and training algorithm for convolutional neural networks that can be directly applied to both distance metric learning and image classification problems.  ...  Our method treats all training features from a deep neural network as Gaussian kernel centres and computes loss by summing the influence of a feature's nearby centres in the feature embedding space.  ...  The best success on embedding learning tasks has been achieved by deep metric learning methods [10, 11, 12, 2, 13], which make use of deep neural networks.  ... 
arXiv:1705.09780v3 fatcat:2hyi2uixqjawxm6clotfzankkq
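The loss described in the snippet above — class scores obtained by summing Gaussian kernel influences of training centres — can be sketched as follows. This is a simplified illustration that sums over all centres rather than only nearby ones (the paper restricts to nearest neighbours for efficiency), and the function name and signature are assumptions:

```python
import numpy as np

def gaussian_kernel_loss(feature, centres, labels, true_label, sigma=1.0):
    """Negative log-probability of the true class, where each class score
    sums the Gaussian kernel influence of that class's training centres."""
    w = np.exp(-((centres - feature) ** 2).sum(axis=1) / (2 * sigma ** 2))
    classes = np.unique(labels)
    scores = np.array([w[labels == c].sum() for c in classes])
    probs = scores / scores.sum()
    return -np.log(probs[list(classes).index(true_label)])

centres = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = np.array([0, 1])
loss = gaussian_kernel_loss(np.zeros(2), centres, labels, true_label=0)
```

Because the same kernel sum defines both the classifier and the loss, minimizing it simultaneously shapes the embedding (metric learning) and the decision rule (classification).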

Deep Graph Similarity Learning: A Survey [article]

Guixiang Ma, Nesreen K. Ahmed, Theodore L. Willke, Philip S. Yu
2020 arXiv   pre-print
Here, we provide a comprehensive review of the existing literature on deep graph similarity learning. We propose a systematic taxonomy for the methods and applications.  ...  In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification,  ...  The graphs are decomposed into a number of primary walk groups with different walk lengths, and a generalized multiple kernel learning algorithm is applied to combine all the context-dependent graph kernels  ... 
arXiv:1912.11615v2 fatcat:mz6zq2wuhvhulcz67sgilmczoy

Unsupervised Inductive Graph-Level Representation Learning via Graph-Graph Proximity [article]

Yunsheng Bai, Hao Ding, Yang Qiao, Agustin Marinovic, Ken Gu, Ting Chen, Yizhou Sun, Wei Wang
2019 arXiv   pre-print
We introduce a novel approach to graph-level representation learning, which is to embed an entire graph into a vector space where the embeddings of two graphs preserve their graph-graph proximity.  ...  The learned neural network can be considered as a function that receives any graph as input, either seen or unseen in the training set, and transforms it into an embedding.  ...  Compared with graph kernels, UGRAPHEMB learns a function that preserves a general graph similarity/distance metric such as Graph Edit Distance (GED), and as a result, yields a graph-level embedding for  ... 
arXiv:1904.01098v2 fatcat:atdqtznunbbo3p6xvywnucsun4

Heterogeneous Relational Kernel Learning [article]

Andre T. Nguyen, Edward Raff
2019 arXiv   pre-print
We show the practical utility of our method by leveraging the learned embeddings for clustering, pattern discovery, and anomaly detection.  ...  We extend prior work to create an interpretable kernel embedding for heterogeneous time series.  ...  ACKNOWLEDGEMENTS Special thanks to Drew Farris for his support of this work and to Eli N. Weinstein for interesting conversations and insights on connections to model based search.  ... 
arXiv:1908.09219v1 fatcat:n5mpga6iybbitga54wvt5k6jhy

Neural Decoding with Kernel-Based Metric Learning

Austin J. Brockmeier, John S. Choi, Evan G. Kriminger, Joseph T. Francis, Jose C. Principe
2014 Neural Computation  
While a number of metrics for individual neurons exist, a method to optimally combine single-neuron metrics into multineuron, or population-based, metrics is lacking.  ...  We pose the problem of optimizing multineuron metrics and other metrics using centered alignment, a kernel-based dependence measure.  ...  We thank the reviewers for suggestions that have improved this work.  ... 
doi:10.1162/neco_a_00591 pmid:24684447 fatcat:b7h4xu2e2jbu5cr6zfgzafu74i
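The centered alignment criterion named in the snippet above has a compact closed form: center both Gram matrices and take their normalized Frobenius inner product. A minimal numpy sketch (function name is an assumption):

```python
import numpy as np

def centered_alignment(K, L):
    """Centered kernel alignment between two Gram matrices: the cosine of
    the angle between the centered matrices under the Frobenius inner product."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Kc, Lc = H @ K @ H, H @ L @ H
    return (Kc * Lc).sum() / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

v = np.array([1.0, 2.0, 3.0])
K = np.outer(v, v)  # a rank-one Gram matrix for illustration
a = centered_alignment(K, K)
```

Alignment with a label-derived target kernel then gives a differentiable objective for tuning the weights that combine single-neuron metrics into a population metric.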

Deep graph similarity learning: a survey

Guixiang Ma, Nesreen K. Ahmed, Theodore L. Willke, Philip S. Yu
2021 Data mining and knowledge discovery  
Here, we provide a comprehensive review of the existing literature on deep graph similarity learning. We propose a systematic taxonomy for the methods and applications.  ...  Abstract: In many domains where data are represented as graphs, learning a similarity metric among graphs is considered a key problem, which can further facilitate various learning tasks, such as classification  ... 
doi:10.1007/s10618-020-00733-5 fatcat:ip5p6rrgxva4fpn7nwifp3hkza

Learning Multi-modal Similarity [article]

Brian McFee, Gert Lanckriet
2010 arXiv   pre-print
We present a novel multiple kernel learning technique for integrating heterogeneous data into a single, unified similarity space.  ...  Our algorithm learns an optimal ensemble of kernel transformations which conform to measurements of human perceptual similarity, as expressed by relative comparisons.  ...  Note that the poor performance of MFCC and AT kernels may be expected, as they derive from song-level rather than  ...  Table 4: aset400 embedding results with multiple kernel learning: the learned metrics  ... 
arXiv:1008.5163v1 fatcat:demnetxmlfadtit6mxbsxfsniy

Wasserstein Weisfeiler-Lehman Graph Kernels [article]

Matteo Togninalli, Elisabetta Ghisu, Felipe Llinares-López, Bastian Rieck, Karsten Borgwardt
2019 arXiv   pre-print
We further propose a Weisfeiler-Lehman inspired embedding scheme for graphs with continuous node attributes and weighted edges, enhance it with the computed Wasserstein distance, and thus improve the state-of-the-art  ...  We propose a novel method that relies on the Wasserstein distance between the node feature vector distributions of two graphs, which allows us to find subtler differences in data sets by considering graphs  ...  On a more general level, our proposed method provides a solid foundation for the use of optimal transport theory in kernel methods and highlights the large potential of optimal transport for machine learning  ... 
arXiv:1906.01277v2 fatcat:7ungpzoserec3lsw64up64sori
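The core quantity in the snippet above — a Wasserstein distance between the node-feature distributions of two graphs, turned into a kernel — can be sketched in the scalar-feature, equal-size case, where the 1-D Wasserstein-1 distance reduces to the mean absolute difference of sorted samples. The paper handles vector-valued WL-refined features via general optimal transport; the simplification and function names below are assumptions for illustration:

```python
import numpy as np

def wasserstein_1d(a, b):
    """Exact 1-D Wasserstein-1 distance between two equal-size samples:
    mean absolute difference of the sorted values."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    assert a.shape == b.shape  # equal-size simplification
    return np.abs(a - b).mean()

def laplacian_kernel(a, b, lam=1.0):
    """Turn the distance into a similarity by exponentiating its negative,
    as Laplacian-type kernels on a distance do."""
    return np.exp(-lam * wasserstein_1d(a, b))
```

Exponentiating a metric in this way yields a similarity in (0, 1]; the paper additionally analyses when the resulting Gram matrices are (approximately) positive definite.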
Showing results 1 — 15 out of 43,732 results