32,331 Hits in 5.6 sec

Kernelization Using Structural Parameters on Sparse Graph Classes [article]

Jakub Gajarský, Petr Hliněný, Jan Obdržálek, Sebastian Ordyniak, Felix Reidl, Peter Rossmanith, Fernando Sánchez Villaamil, Somnath Sikdar
2015 arXiv   pre-print
To the best of our knowledge, no meta-theorems for polynomial kernels are known for any larger sparse graph classes; e.g., for classes of bounded expansion or for nowhere dense ones.  ...  While our parameter may seem rather strong, we argue that a linear kernelization result on graphs of bounded expansion with a weaker parameter (than treedepth modulator) would fail to include some of the  ...  This view replaces the natural problem parameter, whose structural impact diminishes in larger sparse graph classes, by an explicit structural parameter which retains the crucial interaction between parameter  ... 
arXiv:1302.6863v3 fatcat:vctxcjvj55hkhmplzxawjj4d5q

Kernelization using structural parameters on sparse graph classes

Jakub Gajarský, Petr Hliněný, Jan Obdržálek, Sebastian Ordyniak, Felix Reidl, Peter Rossmanith, Fernando Sánchez Villaamil, Somnath Sikdar
2017 Journal of computer and system sciences (Print)  
For nowhere dense graph classes, our result yields almost-linear kernels.  ...  We prove that graph problems with finite integer index have linear kernels on graphs of bounded expansion when parameterized by the size of a modulator to constant-treedepth graphs.  ...  To put this parameterization into context, let us recap some previous work on structural parameters.  ... 
doi:10.1016/j.jcss.2016.09.002 fatcat:4fcere7danhlth7bjvzv2agx7u

Kernelization Using Structural Parameters on Sparse Graph Classes [chapter]

Jakub Gajarský, Petr Hliněný, Jan Obdržálek, Sebastian Ordyniak, Felix Reidl, Peter Rossmanith, Fernando Sánchez Villaamil, Somnath Sikdar
2013 Lecture Notes in Computer Science  
For nowhere dense graph classes, our result yields almost-linear kernels.  ...  We prove that graph problems with finite integer index have linear kernels on graphs of bounded expansion when parameterized by the size of a modulator to constant-treedepth graphs.  ...  To put this parameterization into context, let us recap some previous work on structural parameters.  ... 
doi:10.1007/978-3-642-40450-4_45 fatcat:nkzftiqszfea5e5474tuxixa2m
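
The three records above concern meta-theorems for kernels. To make the notion of a kernel concrete, here is a minimal sketch of the classic Buss kernelization for Vertex Cover, a textbook example of a (quadratic) kernel; it is NOT the treedepth-modulator result of these papers, just an illustration of what "kernel" means.

```python
def buss_kernel(edges, k):
    """Reduce a Vertex-Cover instance (edges, k) to an equivalent instance
    whose size is bounded by a function of k alone.

    Rule 1: a vertex of degree > k must be in every size-<=k cover;
            take it into the cover and decrement k.
    Rule 2: after Rule 1 is exhausted, a YES-instance has at most k^2 edges.
    Returns (reduced_edges, k') for the kernel, or None for a NO-instance.
    """
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:  # Rule 1: v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:  # Rule 2: too many edges -> NO
        return None
    return edges, k
```

On a star with four leaves and k = 1, Rule 1 takes the center and the kernel is empty; on a triangle with k = 1, the rules certify a NO-instance.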

Laplace Graph Embedding Class Specific Dictionary Learning for Face Recognition

Li Wang, Yan-Jiang Wang, Bao-Di Liu
2018 Journal of Electrical and Computer Engineering  
Additionally, in each class dictionary training process, the LGECSDL algorithm introduces the Laplace graph embedding method to the objective function in order to keep the local structure of each class  ...  In this paper, we propose the Laplace graph embedding class specific dictionary learning (LGECSDL) algorithm, which trains a weight matrix and embeds a Laplace graph to reconstruct the dictionary.  ...  Conclusion We present a novel Laplace graph embedding class specific dictionary learning algorithm with kernels.  ... 
doi:10.1155/2018/2179049 fatcat:fhimnv25svbbbnz3x6bt7p6ck4
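
The LGECSDL entry above adds a Laplace graph embedding term to its dictionary objective to preserve local structure. A minimal sketch of such a Laplacian regularizer (names and the toy graph are ours, not the paper's): tr(V L Vᵀ) penalizes codes V that differ across edges of the graph.

```python
import numpy as np

def laplacian_regularizer(V, W):
    """tr(V L V^T) = 0.5 * sum_ij W[i,j] * ||V[:,i] - V[:,j]||^2,
    where L = D - W is the combinatorial graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(V @ L @ V.T)

# Path graph on 3 vertices; codes agree on the first edge, differ on the second.
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
r = laplacian_regularizer(np.array([[1.0, 1.0, 0.0]]), W)
```

Only the edge whose endpoints carry different codes contributes, so r equals the squared difference on that edge.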

Graph learning under sparsity priors [article]

Hermina Petric Maretic, Dorina Thanou, Pascal Frossard
2017 arXiv   pre-print
The dictionary is constructed on polynomials of the graph Laplacian, which can sparsely represent a general class of graph signals composed of localized patterns on the graph.  ...  Graph signals offer a very generic and natural representation for data that lives on networks or irregular structures.  ...  SUPPLEMENTARY MATERIALS Complementary MATLAB code can be found on: https://github.com/Hermina/GraphLearningSparsityPriors  ... 
arXiv:1707.05587v1 fatcat:ht4743pfpjhqli6jgm3dgo4pje

Graph learning under sparsity priors

Hermina Petric Maretic, Dorina Thanou, Pascal Frossard
2017 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
The dictionary is constructed on polynomials of the graph Laplacian, which can sparsely represent a general class of graph signals composed of localized patterns on the graph.  ...  Graph signals offer a very generic and natural representation for data that lives on networks or irregular structures.  ...  SUPPLEMENTARY MATERIALS Complementary MATLAB code can be found on: https://github.com/Hermina/GraphLearningSparsityPriors  ... 
doi:10.1109/icassp.2017.7953413 dblp:conf/icassp/MareticTF17 fatcat:mzhjwylhtrgmbknlo5dh62xixy
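
The two records above build dictionaries from polynomials of the graph Laplacian, so each atom is localized within a few hops of a vertex. A minimal sketch of that construction, with our own toy graph and coefficients (not the paper's learned ones):

```python
import numpy as np

def laplacian(W):
    """Combinatorial graph Laplacian L = D - W for weight matrix W."""
    return np.diag(W.sum(axis=1)) - W

def polynomial_dictionary(L, alpha):
    """One polynomial subdictionary D = sum_k alpha[k] * L^k; column i is
    the atom localized around vertex i (support within len(alpha)-1 hops)."""
    n = L.shape[0]
    D, Lk = np.zeros((n, n)), np.eye(n)
    for a in alpha:
        D += a * Lk
        Lk = Lk @ L
    return D

# 4-cycle: a degree-1 polynomial gives atoms supported within one hop.
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
D = polynomial_dictionary(laplacian(W), alpha=[1.0, -0.5])
```

On this 2-regular graph L = 2I - W, so the chosen coefficients give D = I - 0.5(2I - W) = 0.5 W: each atom lives exactly on the neighbors of its center vertex.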

Double linear regressions for single labeled image per person face recognition

Fei Yin, L.C. Jiao, Fanhua Shang, Lin Xiong, Shasha Mao
2014 Pattern Recognition  
Finally, DLR takes into account both the discriminating efficiency and the sparse representation structure by using the learned sparse representation regularization term as a regularization term of Linear  ...  DLR simultaneously seeks the best discriminating subspace and preserves the sparse representation structure.  ...  Then it is used in learning a more discriminative sparse representation structure. (4) Unlike SDA, there are no graph construction parameters in DLR.  ... 
doi:10.1016/j.patcog.2013.09.013 fatcat:x7mmpocuyvb65fl7bi5iyesfoq

Parametric dictionary learning for graph signals

Dorina Thanou, David I Shuman, Pascal Frossard
2013 2013 IEEE Global Conference on Signal and Information Processing  
We propose a parametric dictionary learning algorithm to design structured dictionaries that sparsely represent graph signals.  ...  We incorporate the graph structure by forcing the learned dictionaries to be concatenations of subdictionaries that are polynomials of the graph Laplacian matrix.  ...  That is, given a weighted graph and a class of signals on that graph, we want to construct an overcomplete dictionary of atoms that can sparsely represent graph signals from the given class as linear combinations  ... 
doi:10.1109/globalsip.2013.6736921 dblp:conf/globalsip/ThanouSF13 fatcat:62jymjkuengjvbdfy7fckydbpa

Multi-graph learning of spectral graph dictionaries

Dorina Thanou, Pascal Frossard
2015 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)  
We propose to learn graph atoms and build graph dictionaries that provide sparse representations for classes of signals, which share common spectral characteristics but reside on the vertices of different  ...  Such a design permits to abstract from the precise graph topology and to design dictionaries that can be trained and eventually used on different graphs.  ...  Given a class of graph signals that live on a set of weighted graphs, we want to construct an overcomplete dictionary of atoms that can sparsely represent graph signals as linear combinations of only a  ... 
doi:10.1109/icassp.2015.7178601 dblp:conf/icassp/ThanouF15 fatcat:ydiwq65bybderlfbada2tnqfga
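
The entry above trains spectral dictionaries that abstract from the precise graph topology. A minimal sketch of that idea, with a spectral profile and graphs of our own choosing: a single kernel g(λ) is defined once and instantiated on any graph through its Laplacian eigendecomposition.

```python
import numpy as np

def spectral_atoms(W, g):
    """Atoms D = U g(Lambda) U^T, where L = U Lambda U^T is the
    eigendecomposition of the Laplacian of weight matrix W."""
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(g(lam)) @ U.T

g = lambda lam: np.exp(-lam)  # a heat-kernel-like spectral profile

# The same g reused on two different graphs: a path P3 and a triangle K3.
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
D1, D2 = spectral_atoms(path, g), spectral_atoms(tri, g)
```

Since g acts on eigenvalues only, the atoms are symmetric, and their trace is the sum of g over the graph's Laplacian spectrum (0, 3, 3 for K3).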

HYPERSPECTRAL IMAGE KERNEL SPARSE SUBSPACE CLUSTERING WITH SPATIAL MAX POOLING OPERATION

Hongyan Zhang, Han Zhai, Wenzhi Liao, Liqin Cao, Liangpei Zhang, Aleksandra Pižurica
2016 The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences  
In this paper, we present a kernel sparse subspace clustering with spatial max pooling operation (KSSC-SMP) algorithm for hyperspectral remote sensing imagery.  ...  In particular, the sparse subspace clustering (SSC) model is extended to nonlinear manifolds, which can better explore the complex nonlinear structure of hyperspectral images (HSIs) and obtain a much more  ...  Among them, the kernel strategy is one of the most commonly used and effective methods.  ... 
doi:10.5194/isprs-archives-xli-b3-945-2016 fatcat:uocmzjem2zgzjit6nsmknxpbee

Kernel Sparse Subspace Clustering with a Spatial Max Pooling Operation for Hyperspectral Remote Sensing Data Interpretation

Han Zhai, Hongyan Zhang, Xiong Xu, Liangpei Zhang, Pingxiang Li
2017 Remote Sensing  
With the help of the kernel sparse representation, a more accurate representation coefficient matrix can be obtained.  ...  However, these methods are all based on the linear representation model, which conflicts with the well-known nonlinear structure of HSIs and limits their performance to a large degree.  ...  We then construct the similarity graph using the sparse coefficient matrix [15] [16] [17] .  ... 
doi:10.3390/rs9040335 fatcat:hi3wyqthvnhkvjpcyx7dth2iie

Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning

Bing Li, Chunfeng Yuan, Weihua Xiong, Weiming Hu, Houwen Peng, Xinmiao Ding, Steve Maybank
2017 IEEE Transactions on Pattern Analysis and Machine Intelligence  
Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure so that the overall performance inevitably degrades in complex environments.  ...  The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint  ...  View Generation Using Sparse ε-graph Using the proposed sparse ε-graph, we can generate different graphs with various parameters < λ, ε >, as: (i) ε = 0, Independent Set.  ... 
doi:10.1109/tpami.2017.2669303 pmid:28212079 fatcat:tbrnc4zwnrexjkkfizkulf75va
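
The entry above builds context graphs within a bag via a sparse ε-graph parameterized by ⟨λ, ε⟩, with ε = 0 yielding an independent set. As a rough, generic analogue (not the paper's sparse ε-graph construction), here is a plain ε-neighborhood graph, which likewise degenerates to an independent set for small ε:

```python
import numpy as np

def epsilon_graph(X, eps):
    """Adjacency of the generic eps-neighborhood graph on the columns of X:
    connect two instances iff their Euclidean distance is < eps, so a small
    eps produces no edges at all (an independent set)."""
    n = X.shape[1]
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[:, i] - X[:, j]) < eps:
                A[i, j] = A[j, i] = True
    return A

# Three 1-D instances at 0, 1, 3: only the closest pair connects for eps=1.5.
X = np.array([[0.0, 1.0, 3.0]])
A = epsilon_graph(X, 1.5)
```

Varying ε sweeps out a family of graphs on the same instances, which is the spirit of generating "different graphs with various parameters".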

Coinciding Walk Kernels: Parallel Absorbing Random Walks for Learning with Graphs and Few Labels

Marion Neumann, Roman Garnett, Kristian Kersting
2013 Asian Conference on Machine Learning  
In addition to its intuitive probabilistic interpretation, coinciding walk kernels outperform existing kernel- and walk-based methods on the task of node-label prediction in sparsely labeled graphs with  ...  We also show that computing CWKs is faster than many state-of-the-art kernels on graphs.  ...  using graph structure only.  ... 
dblp:conf/acml/NeumannGK13 fatcat:k5w4ub2vife4zhogqhztge5onu
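
The entry above builds on absorbing random walks for label prediction in sparsely labeled graphs. A minimal sketch of that ingredient (the graph and the fundamental-matrix route are our toy choices, not the coinciding-walk-kernel computation itself): a walk started at an unlabeled node is absorbed at labeled nodes, and the node takes the label it is most likely to be absorbed into.

```python
import numpy as np

def absorption_probabilities(A, labeled):
    """B[u, l] = probability that a walk from the u-th unlabeled node is
    first absorbed at the l-th labeled node. A is a symmetric adjacency
    matrix with no isolated vertices; labeled is a list of node indices."""
    n = A.shape[0]
    unlabeled = [v for v in range(n) if v not in labeled]
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Q = P[np.ix_(unlabeled, unlabeled)]    # moves among unlabeled nodes
    R = P[np.ix_(unlabeled, labeled)]      # moves into absorbing nodes
    # Fundamental-matrix identity: B = (I - Q)^{-1} R
    return np.linalg.solve(np.eye(len(unlabeled)) - Q, R)

# Path 0-1-2-3 with endpoints labeled: node 1 leans to 0, node 2 to 3.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
B = absorption_probabilities(A, [0, 3])
```

Each unlabeled node is absorbed at its nearer endpoint with probability 2/3, matching the intuition that labels propagate along the graph structure.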

Graph-Based Learning via Auto-Grouped Sparse Regularization and Kernelized Extension

Yuqiang Fang, Ruili Wang, Bin Dai, Xindong Wu
2015 IEEE Transactions on Knowledge and Data Engineering  
Thus, we propose a new method of constructing an informative graph using auto-grouped sparse regularization based on the ℓ1-graph, which is called the Group Sparse graph (GS-graph).  ...  We also show how to efficiently construct a GS-graph in reproducing kernel Hilbert space with the kernel trick.  ...  ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for their valuable and constructive comments on improving the paper.  ... 
doi:10.1109/tkde.2014.2312322 fatcat:w3nqibytxzgyzfvs2kusdzjbbq
Showing results 1 — 15 of 32,331