
Random Feature Maps for Dot Product Kernels [article]

Purushottam Kar, Harish Karnick
2012 arXiv   pre-print
We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which  ...  We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces.  ...  P. K. thanks Prateek Jain and Manik Varma for useful discussions and Devanshu Bhimwal for pointing out an error in Lemma 10.  ... 
arXiv:1201.6530v3 fatcat:xdpgqlyugvhs3py64apzdvxzmu
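
The Kar–Karnick construction samples a Maclaurin term of the kernel at random and estimates it with Rademacher projections. A minimal sketch of that idea, assuming the standard formulation (function names and the p = 2 tail parameter are my choices, not the authors' code):

    import math
    import numpy as np

    def random_maclaurin(X, D, coef, p=2.0, seed=None):
        # Random Maclaurin features for a dot product kernel
        # K(x, y) = sum_n coef(n) * <x, y>^n, so that E[<z(x), z(y)>] = K(x, y).
        rng = np.random.default_rng(seed)
        n, d = X.shape
        Z = np.empty((n, D))
        for j in range(D):
            # sample the degree N with P(N = k) = 1 / p^(k+1)
            N = rng.geometric(1.0 - 1.0 / p) - 1
            prod = np.ones(n)
            for _ in range(N):
                w = rng.choice([-1.0, 1.0], size=d)  # Rademacher projection
                prod *= X @ w
            # importance weight p^(N+1) keeps the estimator unbiased
            Z[:, j] = np.sqrt(coef(N) * p ** (N + 1)) * prod
        return Z / np.sqrt(D)

    # e.g. K(x, y) = exp(<x, y>) has Maclaurin coefficients 1/n!
    X = np.random.default_rng(0).standard_normal((100, 10)) * 0.3
    Z = random_maclaurin(X, D=512, coef=lambda k: 1.0 / math.factorial(k))
    # Z @ Z.T approximates exp(X @ X.T) entrywise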

Sparse random feature maps for the item-multiset kernel

Kyohei Atarashi, Satoshi Oyama, Masahito Kurihara
2021 Neural Networks  
Unfortunately, random feature maps for the itemset kernel and dot product kernels cannot approximate the item-multiset kernel.  ...  Random feature maps are a promising tool for large-scale kernel methods.  ...  Many types of random feature maps have been proposed: random Fourier feature maps for shift-invariant kernels [28], random Maclaurin (RM) feature maps for dot product kernels [17], tensor sketching  ... 
doi:10.1016/j.neunet.2021.06.024 pmid:34280609 fatcat:5ywjcbwukbgxbbyjgn3ysi35uy
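
The "tensor sketching" mentioned above refers to the Pham–Pagh construction for polynomial kernels. A compact sketch of the idea, assuming the standard CountSketch-plus-FFT formulation (variable names are mine):

    import numpy as np

    def tensor_sketch(x, hashes, signs, D):
        # CountSketch x once per degree, then multiply in the FFT domain:
        # the FFT of a circular convolution of count sketches equals the
        # entrywise product of their FFTs.
        fft_prod = np.ones(D, dtype=complex)
        for h, s in zip(hashes, signs):
            cs = np.zeros(D)
            np.add.at(cs, h, s * x)  # cs[h[j]] += s[j] * x[j]
            fft_prod *= np.fft.fft(cs)
        return np.real(np.fft.ifft(fft_prod))

    # <ts(x), ts(y)> is an unbiased estimate of the degree-p polynomial
    # kernel <x, y>^p
    rng = np.random.default_rng(0)
    d, D, p = 10, 64, 2
    hashes = [rng.integers(0, D, size=d) for _ in range(p)]
    signs = [rng.choice([-1.0, 1.0], size=d) for _ in range(p)]
    x, y = rng.standard_normal(d), rng.standard_normal(d)
    approx = tensor_sketch(x, hashes, signs, D) @ tensor_sketch(y, hashes, signs, D)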

Random Feature Maps for the Itemset Kernel

Kyohei Atarashi, Subhransu Maji, Satoshi Oyama
2019 Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence and the Thirty-First Innovative Applications of Artificial Intelligence Conference (AAAI-19)  
We present random feature maps for the itemset kernel, which uses feature combinations, and includes the ANOVA kernel, the all-subsets kernel, and the standard dot product.  ...  We also present theoretical results for a proposed map, discuss the relationship between factorization machines and linear models using a proposed map for the ANOVA kernel, and relate the proposed feature  ...  Acknowledgement: This work was supported by Global Station for Big Data and Cybersecurity, a project of Global Institution for Collaborative Research and Education at Hokkaido University.  ... 
doi:10.1609/aaai.v33i01.33013199 fatcat:4yhvhnomrjcqdia4ya73ut3cn4
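
The ANOVA kernel that this entry's feature maps approximate can itself be evaluated exactly by dynamic programming, which is useful as a reference when testing an approximation. A small sketch (not the paper's code):

    import numpy as np

    def anova_kernel(x, y, m):
        # K_A^m(x, y) = sum over all index sets i1 < ... < im of
        # prod_k x[i_k] * y[i_k], computed in O(d * m) time instead of
        # enumerating all C(d, m) subsets.
        a = np.zeros(m + 1)
        a[0] = 1.0
        for j in range(len(x)):
            # sweep degrees from high to low so coordinate j is used once
            for t in range(min(j + 1, m), 0, -1):
                a[t] += x[j] * y[j] * a[t - 1]
        return a[m]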

Data driven frequency mapping for computationally scalable object detection

Fatih Porikli, Huseyin Ozkan
2011 2011 8th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS)  
To accelerate the training and particularly testing of such nonlinear kernel machines, we map the input data onto a low-dimensional spectral (Fourier) feature space using a cosine transform, design a kernel  ...  Nonlinear kernel Support Vector Machines achieve better generalizations, yet their training and evaluation speeds are prohibitively slow for real-time object detection tasks where the number of data points  ...  An explicit mapping of the data to a low-dimensional Euclidean inner product space using a randomized feature map for approximating shift invariant kernels is also proposed in [11].  ... 
doi:10.1109/avss.2011.6027289 dblp:conf/avss/PorikliO11 fatcat:235e3u2eingsbou6m4jgbggc34
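
The randomized feature map for shift-invariant kernels cited as [11] here is the Rahimi–Recht random Fourier features construction. A minimal sketch for the Gaussian kernel (parameter names are my own):

    import numpy as np

    def rff_map(X, D, gamma, seed=None):
        # z(x) . z(y) approximates exp(-gamma * ||x - y||^2); the columns of
        # W are samples from the kernel's spectral density N(0, 2*gamma*I).
        rng = np.random.default_rng(seed)
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], D))
        b = rng.uniform(0.0, 2 * np.pi, size=D)  # random phases
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)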

Compact Random Feature Maps [article]

Raffay Hamid and Ying Xiao and Alex Gittens and Dennis DeCoste
2013 arXiv   pre-print
Kernel approximation using randomized feature maps has recently gained a lot of interest.  ...  To address this challenge, we propose compact random feature maps (CRAFTMaps) to approximate polynomial kernels more concisely and accurately.  ...  While randomized feature maps are applicable to approximate the more general class of dot-product kernels, in this work we focus on analyzing polynomial kernels, where K(x, y) is of the form (⟨x, y⟩ + q)  ... 
arXiv:1312.4626v1 fatcat:iirwzpbnj5b5bncejzr76cxjaa
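
The CRAFTMaps idea, as described in the abstract, is to first up-project into more random features than ultimately needed and then compact them with a random linear projection. A rough sketch of the compaction stage, assuming an up-projected feature matrix such as the random Maclaurin features sketched earlier (E and D are hypothetical sizes, the function name is mine):

    import numpy as np

    def craft_compact(Z_up, D, seed=None):
        # Z_up: n x E matrix of "up-projected" random features, E > D.
        # Down-project with a Gaussian Johnson-Lindenstrauss matrix so the
        # compact D-dimensional features still preserve the kernel estimate.
        rng = np.random.default_rng(seed)
        E = Z_up.shape[1]
        G = rng.standard_normal((E, D)) / np.sqrt(D)
        return Z_up @ G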

No-Trick (Treat) Kernel Adaptive Filtering using Deterministic Features [article]

Kan Li, Jose C. Principe
2019 arXiv   pre-print
In this paper, we view the dot product of these explicit mappings not as an approximation, but as an equivalent positive-definite kernel that induces a new finite-dimensional reproducing kernel Hilbert  ...  A popular approach to handling this problem, known as random Fourier features, samples from a distribution to obtain the basis of a lower-dimensional space, where its dot product approximates the kernel  ... 
arXiv:1912.04530v1 fatcat:p6qo5tv7sjfzfjkmqqmizu4g3q
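
The paper's viewpoint that an explicit finite-dimensional map defines an exact positive-definite kernel, rather than a mere approximation, can be checked numerically: any Gram matrix of the form Z Z^T is positive semidefinite by construction. A small illustration with random-Fourier-style features (the map here is arbitrary, not the authors' deterministic features):

    import numpy as np

    rng = np.random.default_rng(0)
    # an arbitrary explicit map z: R^5 -> R^50 (random Fourier style)
    Z = np.sqrt(2.0 / 50) * np.cos(
        rng.standard_normal((100, 5)) @ rng.standard_normal((5, 50))
        + rng.uniform(0.0, 2 * np.pi, 50))
    eigs = np.linalg.eigvalsh(Z @ Z.T)  # Gram matrix of k(x, y) = z(x) . z(y)
    assert eigs.min() > -1e-8  # nonnegative up to floating point error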

Random Feature Maps via a Layered Random Projection (LaRP) Framework for Object Classification [article]

A. G. Chung, M. J. Shafiee, A. Wong
2016 arXiv   pre-print
The approximation of nonlinear kernels via linear feature maps has recently gained interest due to their applications in reducing the training and testing time of kernel-based learning algorithms.  ...  We introduce a Layered Random Projection (LaRP) framework, where we model the linear kernels and nonlinearity separately for increased training efficiency.  ...  Inspired by [7], Kar and Karnick [8] presented feature maps approximating positive definite dot product kernels via low-distortion embeddings of dot product kernels into linear Euclidean spaces  ... 
arXiv:1602.01818v1 fatcat:ttent3p4o5f67i24akcued4h5e

Improved Random Features for Dot Product Kernels [article]

Jonas Wacker, Motonobu Kanagawa, Maurizio Filippone
2022 arXiv   pre-print
We make several novel contributions for improving the efficiency of random feature approximations for dot product kernels, to make these kernels more useful in large scale learning.  ...  Third, by using these variance formulas, which can be evaluated in practice, we develop a data-driven optimization approach to improve random feature approximations for general dot product kernels, which  ...  Conclusion: We made several contributions for understanding and improving random feature approximations for dot product kernels.  ... 
arXiv:2201.08712v2 fatcat:gf73u6jowzhvbld3gbmv4ynqdq

A Unifying View of Explicit and Implicit Feature Maps for Structured Data: Systematic Studies of Graph Kernels [article]

Nils M. Kriege, Marion Neumann, Christopher Morris, Kristian Kersting, Petra Mutzel
2017 arXiv   pre-print
In particular, we derive feature maps for random walk and subgraph matching kernels and apply them to real-world graphs with discrete labels.  ...  To this end, explicit feature maps of kernels for vectorial data have been extensively studied. As much real-world data is structured, various kernels for complex data like graphs have been proposed.  ...  DOTPRODUCT for computing the dot product between two feature vectors.  ... 
arXiv:1703.00676v2 fatcat:3pruya7lgjfsplj4bu4ft4zmei

Isolation Kernel: The X Factor in Efficient and Effective Large Scale Online Kernel Learning [article]

Kai Ming Ting and Jonathan R. Wells and Takashi Washio
2019 arXiv   pre-print
It focuses on creating an exact, sparse and finite-dimensional feature map of a kernel called Isolation Kernel.  ...  A current key approach focuses on ways to produce an approximate finite-dimensional feature map, assuming that the kernel used has a feature map with intractable dimensionality---an assumption traditionally  ...  For large ψ, this dot product can be orders of magnitude faster than the naive dot product (see Figure 4 in Section 8.1.2 later).  ... 
arXiv:1907.01104v2 fatcat:g5j2s3cv6bgflm5oqp5wfcloei
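
The exact, sparse, finite-dimensional map mentioned here can be imitated with random Voronoi partitionings: each point gets a one-hot code per partitioning, so the dot product counts shared cells. A rough sketch under that reading (not the authors' implementation; t and psi stand for the paper's partitioning count and sample size):

    import numpy as np

    def isolation_features(X, data, t=100, psi=16, seed=None):
        # For each of t partitionings, sample psi points from `data` and
        # one-hot encode the nearest sample point for every row of X.
        # (Z @ Z.T) / t is then the fraction of partitionings in which two
        # points fall into the same cell.
        rng = np.random.default_rng(seed)
        n = X.shape[0]
        Z = np.zeros((n, t * psi))
        for i in range(t):
            centers = data[rng.choice(len(data), size=psi, replace=False)]
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
            Z[np.arange(n), i * psi + d2.argmin(axis=1)] = 1.0
        return Z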

Random Gegenbauer Features for Scalable Kernel Methods [article]

Insu Han, Amir Zandieh, Haim Avron
2022 arXiv   pre-print
We propose efficient random features for approximating a new and rich class of kernel functions that we refer to as Generalized Zonal Kernels (GZK).  ...  kernel functions such as the entirety of dot-product kernels as well as the Gaussian and the recently introduced Neural Tangent kernels.  ...  Now we present a feature map for the GZK which will be the basis of our efficient random features. Lemma 5 (Feature map for GZK).  ... 
arXiv:2202.03474v1 fatcat:4kxnd3ju3vbjvexlv4w6xxqsiu

A unifying view of explicit and implicit feature maps of graph kernels

Nils M. Kriege, Marion Neumann, Christopher Morris, Kristian Kersting, Petra Mutzel
2019 Data mining and knowledge discovery  
Moreover, we propose and analyze algorithms for computing random walk, shortest-path and subgraph matching kernels by explicit and implicit feature maps.  ...  On this basis we propose exact and approximative feature maps for widely used graph kernels based on the kernel trick.  ...  In addition, we analyze the ratio between the time T_φ for computing the explicit mapping and T_dot for taking dot products.  ... 
doi:10.1007/s10618-019-00652-0 fatcat:fs2ufrjgtbcqlpe7kv2n4mh7h4
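
The T_φ / T_dot trade-off analyzed here can be illustrated even on vectorial data (the paper measures it on graphs): the explicit map is paid once per object, after which every pairwise comparison is a cheap dot product. A toy timing sketch:

    import time
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 50))

    t0 = time.perf_counter()
    Z = np.sqrt(2.0 / 200) * np.cos(X @ rng.standard_normal((50, 200)))  # T_phi
    t1 = time.perf_counter()
    G = Z @ Z.T  # T_dot for all 2000^2 pairs at once
    t2 = time.perf_counter()
    print(f"T_phi = {t1 - t0:.4f}s, T_dot = {t2 - t1:.4f}s")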

Efficient Approximation for Large-Scale Kernel Clustering Analysis

Keng-Pei Lin, Yu-Chen Yang
2014 Pacific Asia Conference on Information Systems  
Kernel k-means is useful for performing clustering on nonlinearly separable data, but it is hard to scale to large data due to its quadratic complexity.  ...  In this paper, we propose an approach which utilizes the low-dimensional feature approximation of the Gaussian kernel function to capitalize on a fast linear k-means solver to perform the nonlinear kernel  ...  The kernel function k(x_i, x_j) implicitly maps x_i and x_j into a high-dimensional feature space and computes their dot product there.  ... 
dblp:conf/pacis/LinY14 fatcat:ezdxjof6uvexrd4oi4amxwrnwy
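
The recipe described here reduces kernel k-means to linear k-means on explicit features. A minimal sketch using scikit-learn's RBFSampler (random Fourier features) as a stand-in for the paper's Gaussian-kernel feature approximation:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.kernel_approximation import RBFSampler

    rng = np.random.default_rng(0)
    # two well-separated Gaussian blobs as toy data
    X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])

    # map into an approximate Gaussian-kernel feature space, then run
    # ordinary (linear) k-means there
    Z = RBFSampler(gamma=0.5, n_components=100, random_state=0).fit_transform(X)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)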

Sampling Techniques for Kernel Methods

Dimitris Achlioptas, Frank McSherry, Bernhard Schölkopf
2001 Neural Information Processing Systems  
We propose randomized techniques for speeding up Kernel Principal Component Analysis on three levels: sampling and quantization of the Gram matrix in training, randomized rounding in evaluating the kernel  ...  expansions, and random projections in evaluating the kernel itself.  ...  BS would like to thank Santosh Venkatesh for detailed discussions on sampling kernel expansions.  ... 
dblp:conf/nips/AchlioptasMS01 fatcat:vl5vwgcfqvemrhxthw5bsjb6tm
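
The first of the three levels, sampling the Gram matrix, amounts to keeping each entry with some probability and rescaling so the expectation is unchanged. A minimal sketch of that step (my function name; the paper also covers quantization and the other two levels):

    import numpy as np

    def sparsify_gram(K, prob, seed=None):
        # Keep each off-diagonal Gram entry independently with probability
        # `prob`, rescaled by 1/prob so E[S] = K; symmetry and the exact
        # diagonal are preserved.
        rng = np.random.default_rng(seed)
        mask = np.triu(rng.random(K.shape) < prob, k=1)
        S = np.where(mask, K / prob, 0.0)
        return S + S.T + np.diag(np.diag(K))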

On Kernel Discriminant Analyses Applied to Phoneme Classification [chapter]

András Kocsor
2005 Lecture Notes in Computer Science  
In this paper we recall two kernel methods for discriminant analysis.  ...  The first one is the kernel counterpart of the ubiquitous Linear Discriminant Analysis (Kernel-LDA), while the second one is a method we named Kernel Springy Discriminant Analysis (Kernel-SDA).  ...  Now let the dot product be implicitly defined by the kernel function κ : X × X → R in some finite- or infinite-dimensional dot product space F with associated mapping φ : X → F such that ∀x, z ∈ X: κ(x, z) = ⟨φ(x), φ(z)⟩.  ... 
doi:10.1007/11427445_58 fatcat:eu6qfehmpfdsnnujof72mzwdwu
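
The definition κ(x, z) = ⟨φ(x), φ(z)⟩ above can be made concrete with the homogeneous quadratic kernel on R², whose feature map is explicit. A worked check:

    import numpy as np

    # k(x, z) = (x . z)^2 on R^2 has the explicit feature map
    # phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2).
    phi = lambda x: np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])
    x, z = np.array([1.0, 2.0]), np.array([3.0, -1.0])
    assert np.isclose(phi(x) @ phi(z), (x @ z) ** 2)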