
Bermuda Triangles: GNNs Fail to Detect Simple Topological Structures [article]

Arseny Tolmachev, Akira Sakai, Masaru Todoriki, Koji Maruhashi
2021 arXiv   pre-print
equation of the mode product of a tensor X in the mixed representation with a matrix W over the last sparse mode.  ...  In the mixed representation, the mode product converts a single tensor mode from sparse to dense.  ... 
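The mode product mentioned in this snippet can be sketched in plain NumPy. This is a dense illustration only, not the paper's mixed sparse/dense representation; the function name and shapes are hypothetical:

```python
import numpy as np

def mode_product(X, W, mode):
    """Mode product X x_mode W: unfold X along `mode`, multiply by W,
    and fold back with the new mode size W.shape[0]."""
    # Move `mode` to the front and flatten the rest -> unfolding X_(mode)
    Xn = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    Yn = W @ Xn  # (r, I_mode) @ (I_mode, prod of remaining sizes)
    # Fold back: the front axis now has size r
    rest = [s for i, s in enumerate(X.shape) if i != mode]
    return np.moveaxis(Yn.reshape([W.shape[0]] + rest), 0, mode)

X = np.arange(24.0).reshape(2, 3, 4)
W = np.random.rand(5, 4)
Y = mode_product(X, W, mode=2)  # last mode: 4 -> 5
print(Y.shape)  # (2, 3, 5)
```

The unfold-multiply-fold pattern is the standard dense route; the paper's point is that in a mixed representation this product converts one sparse mode to dense.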
arXiv:2105.00134v1 fatcat:xdi53srahraj3kgthijj62euuu

K-clustered tensor approximation

Yu-Ting Tsai, Zen-Chung Shih
2012 ACM Transactions on Graphics  
Thus, K-CTA can be regarded as a sparse extension of CTA and a multilinear generalization of sparse representation.  ...  With the increasing demands for photo-realistic image synthesis in real time, we propose a sparse multilinear model, which is named K-Clustered Tensor Approximation (K-CTA), to efficiently analyze and  ...  K-CTA also combines the advantages of CTA and K-SVD to bridge the gap between sparse representation and tensor approximation.  ... 
doi:10.1145/2167076.2167077 fatcat:2f43z2mngvfsndnakwwyzcamde

Robust Multifactor Speech Feature Extraction Based on Gabor Analysis

Qiang Wu, Liqing Zhang, Guangchuan Shi
2011 IEEE Transactions on Audio, Speech, and Language Processing  
A multifactor analysis method is proposed to extract robust sparse features by processing the data samples in tensor structure.  ...  The objective of the sparse constraints is to preserve the statistical characteristic of clean speech data by finding projection matrices of speech subspaces and reduce the noise components which have  ...  The matrix unfolding or mode-n matricizing of an N-order tensor X ∈ R^(I_1×I_2×···×I_N) is a matrix X_(n) ∈ R^(I_n × I_{n+1}I_{n+2}···I_N I_1···I_{n−1}). We denote the mode-n matricizing of X as X_(n).  ... 
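The mode-n matricization defined in this snippet is a few lines of NumPy. A minimal sketch (the column ordering here follows NumPy's natural axis order, which can differ from the I_{n+1}···I_N I_1···I_{n−1} convention quoted above):

```python
import numpy as np

def unfold(X, n):
    """Mode-n matricization: rows indexed by mode n, columns by the
    remaining modes in their natural order."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

X = np.arange(24).reshape(2, 3, 4)
print(unfold(X, 1).shape)  # (3, 8): mode-1 size 3 by 2*4 columns
```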
doi:10.1109/tasl.2010.2070495 fatcat:ztrdicj57jfsvp6dw52rou62ui

Efficient, Out-of-Memory Sparse MTTKRP on Massively Parallel Architectures [article]

Andy Nguyen, Ahmed E. Helal, Fabio Checconi, Jan Laukemann, Jesmin Jahan Tithi, Yongseok Soh, Teresa Ranadive, Fabrizio Petrini, Jee W. Choi
2022 arXiv   pre-print
On the latest Intel and NVIDIA GPUs, BLCO achieves 2.12-2.6X geometric-mean speedup (with up to 33.35X speedup) over the state-of-the-art mixed-mode compressed sparse fiber (MM-CSF) on a range of real-world  ...  sparse tensors.  ...  Figure 4 shows an example sparse tensor in the COO (4a) and F-COO (4b) representations.  ... 
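The COO representation referenced in this snippet stores one coordinate tuple and one value per nonzero. A tiny hypothetical 3-mode example (the F-COO and BLCO formats from the paper are not reproduced here):

```python
import numpy as np

# COO: one row of mode indices per nonzero, plus a parallel value array.
coords = np.array([[0, 0, 1],
                   [1, 2, 0],
                   [2, 1, 3]])
vals = np.array([1.0, 2.0, 3.0])
shape = (3, 3, 4)

# Scatter into a dense tensor to check the layout round-trips.
dense = np.zeros(shape)
dense[tuple(coords.T)] = vals
print(dense[1, 2, 0])  # 2.0
```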
arXiv:2201.12523v1 fatcat:e3y4w2zh3bgonpwgit4b6r7daa

Tensor-matrix products with a compressed sparse tensor

Shaden Smith, George Karypis
2015 Proceedings of the 5th Workshop on Irregular Applications Architectures and Algorithms - IA3 '15  
The bottleneck of computing the CPD is multiplying a sparse tensor by several dense matrices. Algorithms for tensor-matrix products fall into two classes.  ...  The Canonical Polyadic Decomposition (CPD) of tensors is a powerful tool for analyzing multi-way data and is used extensively to analyze very large and extremely sparse datasets.  ...  0905220, OCI-1048018, CNS-1162405, IIS-1247632, IIP-1414153, IIS-1447788), Army Research Office (W911NF-14-1-0316), Intel Software and Services Group, and the Digital Technology Center at the University of  ... 
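The sparse tensor-times-matrix bottleneck this entry refers to is the MTTKRP kernel. A naive nonzero-at-a-time sketch over a COO tensor (illustrative only; the paper's compressed-fiber algorithm avoids exactly this redundant work):

```python
import numpy as np

def mttkrp_coo(coords, vals, factors, mode):
    """MTTKRP: each nonzero x at (i_0, ..., i_{N-1}) scales the
    elementwise product of the other factors' rows and is accumulated
    into row i_mode of the output."""
    rank = factors[0].shape[1]
    M = np.zeros((factors[mode].shape[0], rank))
    for idx, x in zip(coords, vals):
        row = np.full(rank, x)
        for m, F in enumerate(factors):
            if m != mode:
                row *= F[idx[m]]
        M[idx[mode]] += row
    return M

rng = np.random.default_rng(0)
coords = np.array([[0, 1, 2], [1, 0, 0], [2, 2, 1]])
vals = np.array([1.0, -2.0, 0.5])
A, B, C = rng.random((3, 4)), rng.random((3, 4)), rng.random((3, 4))
M = mttkrp_coo(coords, vals, [A, B, C], mode=0)
print(M.shape)  # (3, 4)
```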
doi:10.1145/2833179.2833183 dblp:conf/sc/SmithK15 fatcat:vqe7zqdnxvdxdnzxapildpcxoq

A High-Performance Sparse Tensor Algebra Compiler in Multi-Level IR [article]

Ruiqin Tian, Luanzheng Guo, Jiajia Li, Bin Ren, Gokcen Kestor
2021 arXiv   pre-print
There are tens of storage formats designed for sparse matrices and/or tensors and the performance of sparse tensor operations depends on a particular architecture and/or selected sparse format, which makes  ...  The compiler performs code optimizations and transformations for efficient code generation while covering a wide range of tensor storage formats.  ...  Mode-Generic format [10] is a generic representation of semi-sparse tensors with one or more dense dimensions stored as dense blocks with the coordinates of the blocks stored in COO.  ... 
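The Mode-Generic idea quoted above, keeping some modes dense while storing coordinates for the sparse ones, can be sketched with a hypothetical semi-sparse layout (names and shapes invented for illustration):

```python
import numpy as np

# Semi-sparse 3-mode tensor: modes (0, 1) sparse, mode 2 dense.
# Each stored entry pairs an (i, j) coordinate with a dense mode-2 fiber.
block_coords = [(0, 1), (2, 0)]
block_fibers = np.array([[1.0, 0.0, 2.0],
                         [0.0, 3.0, 0.0]])
shape = (3, 3, 3)

# Expand to dense to check the layout.
dense = np.zeros(shape)
for (i, j), fiber in zip(block_coords, block_fibers):
    dense[i, j, :] = fiber
print(dense[0, 1, 2])  # 2.0
```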
arXiv:2102.05187v1 fatcat:nc5bu7bgobgsxjbppkjtesxmkq

Multifactor sparse feature extraction using Convolutive Nonnegative Tucker Decomposition

Qiang Wu, Liqing Zhang, Andrzej Cichocki
2014 Neurocomputing  
In order to reduce the degeneracy problem of tensor decomposition caused by component delays, the convolutive tensor decomposition model is appropriate for exploring temporal correlations.  ...  In this paper, a flexible two-stage algorithm for the K-mode Convolutive Nonnegative Tucker Decomposition (K-CNTD) model is proposed using an alternating least squares procedure.  ...  The Shift Invariant Sparse Coding (SISC) model [29] is an extension of sparse coding to handle data from linear mixtures.  ... 
doi:10.1016/j.neucom.2013.04.049 fatcat:7yxilx7u2beh7blcmuhrmo4fru

HYPERSPECTRAL IMAGE MIXED NOISE REDUCTION BASED ON IMPROVED K-SVD ALGORITHM

S. Shajun Nisha
2014 International Journal of Research in Engineering and Technology  
This method of denoising can efficiently remove a variety of mixed or single noise by applying sparse regularization of small image patches. It also maintains the image texture in a clear manner.  ...  We propose an algorithm for mixed noise reduction in Hyperspectral Imagery (HSI). The hyperspectral data cube is considered as a third-order tensor.  ...  noise that affects the fidelity of its representation.  ... 
doi:10.15623/ijret.2014.0319151 fatcat:nlt3agkkeneznpt4qgfqwgbldy

Hyperspectral Anomaly Detection through Sparse Representation with Tensor Decomposition-based Dictionary Construction and Adaptive Weighting

Yixin Yang, Shangzhen Song, Delian Liu, Jonathan Cheung-Wai Chan, Jinze Li, Jianqi Zhang
2020 IEEE Access  
Sparse representation-based methods, as an important branch of anomaly detection (AD) technologies for hyperspectral imagery (HSI), have attracted extensive attention.  ...  INDEX TERMS Anomaly detection, background dictionary construction, hyperspectral imagery, sparse representation, tensor representation, Tucker decomposition.  ...  THIRD-ORDER TENSOR REPRESENTATION OF HSI An N-order tensor denoted by X ∈ R^(I_1×I_2×···×I_N) is a general multidimensional array, where I_n (1 ≤ n ≤ N, with n the mode index) is the size of X along mode-n  ... 
doi:10.1109/access.2020.2988128 fatcat:lhsa2lqkyzdzjkmnycoyca5hxy

Robust speech feature extraction based on Gabor filtering and tensor factorization

Qiang Wu, Liqing Zhang, Guangchuan Shi
2009 2009 IEEE International Conference on Acoustics, Speech and Signal Processing  
Then Nonnegative Tensor PCA with a sparse constraint is used to learn the projection matrices from multiple interrelated feature subspaces and extract the robust features.  ...  We employ 2D-Gabor functions with different scales and directions to analyze the localized patches of the power spectrogram, by which the speech signal can be encoded as a general higher-order tensor.  ...  The sparse feature representation S_{u,v} is obtained via the following transformation:  ...  Fig. 4(a) shows an example of the projection matrix in the spectrotemporal domain.  ... 
doi:10.1109/icassp.2009.4960667 dblp:conf/icassp/WuZS09 fatcat:4g7z4gskrbhsjni3te4rioduu4

A Two Stage Algorithm for K-Mode Convolutive Nonnegative Tucker Decomposition [chapter]

Qiang Wu, Liqing Zhang, Andrzej Cichocki
2011 Lecture Notes in Computer Science  
We impose an additional sparseness constraint on the algorithm to find part-based representations.  ...  In this paper, we propose a flexible two-stage algorithm for the K-mode Convolutive Nonnegative Tucker Decomposition (K-CNTD) model by an alternating least squares procedure.  ...  The mode-n matricization of an N-order tensor X rearranges the elements of X to form the matrix X_(n) ∈ R^(I_n × I_{n+1}I_{n+2}···I_N I_1···I_{n−1}).  ... 
doi:10.1007/978-3-642-24958-7_77 fatcat:wfguzbh2e5a53mfldmmrjthtsu

Online Multilinear Dictionary Learning [article]

Thiernithi Variddhisai, Danilo Mandic
2020 arXiv   pre-print
Experiments on two synthetic signals confirm the impressive performance of our proposed method.  ...  With the assumption of separable dictionaries, tensor contraction is used to reduce an N-way model of O(L^N) into a simple matrix equation of O(NL^2) with real-time capability.  ...  Here, simplified accelerated first-order methods [36], [37] are incorporated into a mode-wise coordinate descent method to derive an algorithm for tensor-based online learning of representation dictionaries  ... 
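The separable-dictionary trick in this snippet, replacing one O(L^N)-sized Kronecker matrix acting on vec(X) with N small mode-wise products, can be checked directly in NumPy (a generic sketch, not the paper's algorithm):

```python
import numpy as np

def modewise_apply(X, mats):
    """Apply one small matrix per mode (separable dictionary) instead of
    one large Kronecker-structured matrix on vec(X)."""
    Y = X
    for mode, A in enumerate(mats):
        # Contract A's columns against Y's `mode` axis, then restore order.
        Y = np.moveaxis(np.tensordot(A, Y, axes=(1, mode)), 0, mode)
    return Y

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 3, 4))
A, B, C = (rng.standard_normal((n, n)) for n in (2, 3, 4))

Y = modewise_apply(X, [A, B, C])
# Same result as the full Kronecker matrix acting on vec(X) (row-major).
big = np.kron(np.kron(A, B), C)
assert np.allclose(big @ X.ravel(), Y.ravel())
```

The mode-wise route touches N matrices of size at most L×L instead of materializing the L^N × L^N Kronecker product, which is the complexity reduction the abstract claims.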
arXiv:1703.02492v5 fatcat:kpkydhka7ngvviczbnw2icrizu

Hyperspectral Image Super-Resolution via Non-local Sparse Tensor Factorization

Renwei Dian, Leyuan Fang, Shutao Li
2017 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
The sparse tensor factorization can directly decompose each cube of the HSI as a sparse core tensor and dictionaries of three modes, which reformulates the HSI super-resolution problem as the estimation  ...  of the sparse core tensor and dictionaries for each cube.  ...  Parameters Discussion The maximum number of non-zero elements (the sparsity m) has an important influence on the accuracy and efficiency of the sparse coding problem.  ... 
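Reconstructing a cube from a sparse core tensor and one dictionary per mode, as this entry describes, is a Tucker-style product. A hypothetical sketch (core size, dictionary sizes, and nonzero pattern invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Sparse core with a handful of nonzeros, plus one dictionary per mode.
core = np.zeros((4, 4, 4))
core[0, 1, 2], core[3, 0, 1] = 1.5, -0.7
D1, D2, D3 = (rng.standard_normal((6, 4)) for _ in range(3))

# Cube reconstruction: core x_1 D1 x_2 D2 x_3 D3
cube = np.einsum('abc,ia,jb,kc->ijk', core, D1, D2, D3)
print(cube.shape)  # (6, 6, 6)
```

Because the core is sparse, only the few active dictionary-atom triples contribute to the reconstructed cube, which is what makes the per-cube estimation tractable.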
doi:10.1109/cvpr.2017.411 dblp:conf/cvpr/DianFL17 fatcat:mr3araxxrbdrndxvxmua45fvny

Sparse Tensor Factorization on Many-Core Processors with High-Bandwidth Memory

Shaden Smith, Jongsoo Park, George Karypis
2017 2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS)  
This work investigates how the novel architectural features offered by KNL can be used in the context of decomposing sparse, unstructured tensors using the canonical polyadic decomposition (CPD).  ...  To address these challenging demands, HPC systems are turning to many-core architectures that feature a large number of energy-efficient cores backed by high-bandwidth memory.  ...  While KNL does not include RTM, it is an efficient option for other parallel architectures.  ... 
doi:10.1109/ipdps.2017.84 dblp:conf/ipps/SmithPK17 fatcat:xykssjzn3vhzdjpe6uz7dcttsm

Inter-Frame Compression for Dynamic Point Cloud Geometry Coding [article]

Anique Akhtar, Zhu Li, Geert Van der Auwera
2022 arXiv   pre-print
Efficient point cloud compression is essential for applications like virtual and mixed reality, autonomous driving, and cultural heritage.  ...  We employ convolution on target coordinates to map the latent representation of the previous frame to the downsampled coordinates of the current frame to predict the current frame's feature embedding.  ...  We pass P_a through a convolution of channel size 1 to obtain sparse tensor P_b with features F_b of shape 139,244 × 1.  ... 
arXiv:2207.12554v1 fatcat:klwf52rbprbxhc4shsu423xrii