52,815 Hits in 8.0 sec

Reach Set Approximation through Decomposition with Low-dimensional Sets and High-dimensional Matrices [article]

Sergiy Bogomolov, Marcelo Forets, Goran Frehse, Andreas Podelski, Christian Schilling, Frédéric Viry
2018 arXiv   pre-print
We propose to decompose reach set computations such that set operations are performed in low dimensions, while matrix operations like exponentiation are carried out in the full dimension.  ...  Our method is applicable both in dense- and discrete-time settings.  ...  For this we combined high dimensional linear algebra with low dimensional set computations and a state-of-the-art reachability algorithm.  ... 
arXiv:1801.09526v1 fatcat:3vf22wugefcbpojbanzqacljui
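The decomposition idea in this entry, matrix operations in the full dimension but set operations in low dimensions, can be illustrated with a minimal NumPy sketch. This is an illustration of the general principle only, not the authors' algorithm; the system matrix, step size, and box representation are all arbitrary choices for the example:

```python
import numpy as np

def expm_taylor(M, terms=25):
    # Truncated Taylor series for the matrix exponential; adequate here
    # because ||M|| is small.  Production code would use scipy.linalg.expm.
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Continuous-time system x' = A x (a harmonic oscillator, chosen arbitrarily).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
dt = 0.1
Phi = expm_taylor(A * dt)      # matrix operation in the full dimension

# The reach set is kept as cheap low-dimensional data: an axis-aligned box
# stored as a center c and half-width vector r, i.e. [c - r, c + r].
c = np.array([1.0, 0.0])
r = np.array([0.05, 0.05])
boxes = [(c, r)]
for _ in range(10):
    c = Phi @ c                # box center follows the exact dynamics
    r = np.abs(Phi) @ r        # tight bounding box of the mapped box
    boxes.append((c, r))
```

The box image under a linear map is over-approximated by its bounding box, which is why the radii grow slowly even for this norm-preserving rotation.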

DeepTensor: Low-Rank Tensor Decomposition with Deep Network Priors [article]

Vishwanath Saragadam, Randall Balestriero, Ashok Veeraraghavan, Richard G. Baraniuk
2022 arXiv   pre-print
DeepTensor is a computationally efficient framework for low-rank decomposition of matrices and tensors using deep generative networks.  ...  single DN equipped with 3D convolutions.  ...  ACKNOWLEDGMENTS This work was supported by NSF grants CCF-1911094, IIS-1838177, and IIS-1730574; ONR grants N00014-18-12571, N00014-20-1-2534, and MURI N00014-20-1-2787; AFOSR grant FA9550-22-1-0060; and  ... 
arXiv:2204.03145v1 fatcat:mou5x66ynvhzlcxgpd25rz6j24

A Tensor-Based Approach for Big Data Representation and Dimensionality Reduction

Liwei Kuang, Fei Hao, Laurence T. Yang, Man Lin, Changqing Luo, Geyong Min
2014 IEEE Transactions on Emerging Topics in Computing  
This paper presents a unified tensor model for big data representation and an incremental dimensionality reduction method for high-quality core set extraction.  ...  A case study illustrates that approximate data reconstructed from the core set containing 18% elements can guarantee 93% accuracy in general.  ...  TIME AND MEMORY COMPARISON Compared with the general High Order Singular Value Decomposition method, the proposed incremental High Order Singular Value Decomposition method is efficient and memory saving  ... 
doi:10.1109/tetc.2014.2330516 fatcat:t537pzpgr5exjndluh4fuuex7a
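For readers unfamiliar with the underlying operation, a truncated higher-order SVD, the non-incremental baseline this entry compares against, can be sketched in NumPy as follows; the tensor sizes and ranks are arbitrary illustration choices:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncate(T, ranks):
    # Truncated HOSVD: one factor matrix per mode from the leading left
    # singular vectors of that mode's unfolding, then project T to the core.
    Us = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
          for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(Us):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
    return core, Us

def reconstruct(core, Us):
    T = core
    for m, U in enumerate(Us):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, m, 0), axes=1), 0, m)
    return T

# Synthetic tensor with exact multilinear rank (2, 2, 2).
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 2, 2))
A = rng.standard_normal((5, 2))
B = rng.standard_normal((6, 2))
C = rng.standard_normal((7, 2))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)

core, Us = hosvd_truncate(T, (2, 2, 2))
err = np.linalg.norm(T - reconstruct(core, Us)) / np.linalg.norm(T)
# The small core plus factors is the "core set" analogue: a tiny fraction
# of the 5*6*7 entries, yet here it reconstructs the tensor exactly.
```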

Low-Rank Dynamic Mode Decomposition: An Exact and Tractable Solution [article]

Patrick Héas, Cédric Herzet
2021 arXiv   pre-print
This work studies the linear approximation of high-dimensional dynamical systems using low-rank dynamic mode decomposition (DMD).  ...  The paper also proposes low-complexity algorithms building reduced models from this optimal solution, based on singular value decomposition or eigen value decomposition.  ...  Acknowledgements The authors thank the "Agence Nationale de la Recherche" (ANR) which partially funded this research through the GERONIMO project (ANR-13-JS03-0002).  ... 
arXiv:1610.02962v8 fatcat:yighwfelrzfh5eo6x5oers7vci
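Standard rank-truncated ("exact") DMD, the starting point this work optimizes over, can be sketched as follows. The test system is an arbitrary 2×2 linear map, and this is the classical projected algorithm, not the paper's closed-form optimal solution:

```python
import numpy as np

def dmd(X, Y, r):
    # Rank-truncated exact DMD: project the one-step operator onto the
    # leading r left singular vectors of the snapshot matrix X.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    Atil = Ur.T @ Y @ Vr / sr          # r x r reduced operator
    evals, W = np.linalg.eig(Atil)
    modes = Y @ Vr / sr @ W            # DMD modes lifted to the full space
    return evals, modes

# Snapshots of a known linear system x_{k+1} = A x_k.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
x = np.array([1.0, 1.0])
snaps = [x]
for _ in range(10):
    x = A @ x
    snaps.append(x)
S = np.array(snaps).T                  # columns are states in time order
evals, modes = dmd(S[:, :-1], S[:, 1:], r=2)
# With r equal to the true rank, DMD recovers eig(A) = {0.9, 0.8}.
```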

MultiVis: Content-based Social Network Exploration Through Multi-way Visual Analysis [chapter]

Jimeng Sun, Spiros Papadimitriou, Ching-Yung Lin, Nan Cao, Shixia Liu, Weihong Qian
2009 Proceedings of the 2009 SIAM International Conference on Data Mining  
With the explosion of social media, scalability becomes a key challenge.  ...  In particular, we propose 1) an analytic data model for content-based networks using tensors; 2) an efficient high-order clustering framework for analyzing the data; 3) a scalable context-sensitive graph  ...  Acknowledgement We are pleased to acknowledge Brett Bader and Tamara Kolda from Sandia National lab for providing the tensor toolbox [5] which makes our implementation and experiments an easy job.  ... 
doi:10.1137/1.9781611972795.91 dblp:conf/sdm/SunPLCLQ09 fatcat:y7hhimdhmfbc3espotmb676kd4

High-dimensional uncertainty quantification for an electrothermal field problem using stochastic collocation on sparse grids and tensor train decompositions

Dimitrios Loukrezis, Ulrich Römer, Thorben Casper, Sebastian Schöps, Herbert De Gersem
2017 International journal of numerical modelling  
Possible remedies to this so-called curse of dimensionality are sought in the application of stochastic collocation (SC) on sparse grids (SGs) and of the recently emerged low-rank tensor decomposition  ...  methods, with emphasis on the tensor train (TT) decomposition.  ...  The difficulty that arises when approximating high-dimensional problems is called the curse of dimensionality [5].  ... 
doi:10.1002/jnm.2222 fatcat:g7xsktousfafrh7whnq6jvzgxu
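The tensor train format mentioned here factors a d-way tensor into a chain of 3-way cores. The standard TT-SVD construction (sequential reshapes plus truncated SVDs), sketched on an arbitrary small dense tensor, looks like this:

```python
import numpy as np

def tt_svd(T, eps=1e-12):
    # TT-SVD: repeatedly reshape the remainder into a matrix and split it
    # with a truncated SVD, peeling off one 3-way core per dimension.
    shape = T.shape
    cores, r = [], 1
    M = T.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    # Contract the chain of cores back into a dense tensor.
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape([G.shape[1] for G in cores])

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))
cores = tt_svd(T)
err = np.linalg.norm(T - tt_full(cores)) / np.linalg.norm(T)
```

With a loose tolerance `eps`, the TT ranks shrink and the format compresses; here the tolerance is tight, so reconstruction is essentially exact.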

Common component analysis for multiple covariance matrices

Huahua Wang, Arindam Banerjee, Daniel Boley
2011 Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining - KDD '11  
The traditional approach for finding an accurate low-dimensional approximation to high-dimensional covariance matrices is Principal Component Analysis (PCA) [14, 4].  ...  The key hypothesis driving our analysis is that the high-dimensional covariance matrices are indeed a linearly transformed version of a set of low-dimensional covariance matrices Y_t ∈ R^(r×r), 1 ≤ t ≤ T  ... 
doi:10.1145/2020408.2020565 dblp:conf/kdd/WangBB11 fatcat:hzqgdtkufnavhhtuhyho5w52fm
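A simple way to see the shared-subspace hypothesis Σ_t ≈ P Y_t Pᵀ in code. This is a naive heuristic built on the average covariance, for illustration only, not the paper's optimization:

```python
import numpy as np

def common_components(covs, r):
    # Naive shared-basis heuristic: take the top-r eigenvectors P of the
    # average covariance, then represent each Sigma_t by the small r x r
    # matrix Y_t = P.T @ Sigma_t @ P, so Sigma_t ~= P @ Y_t @ P.T.
    evals, evecs = np.linalg.eigh(sum(covs) / len(covs))
    P = evecs[:, -r:]                       # eigh sorts ascending
    return P, [P.T @ S @ P for S in covs]

# Covariances that exactly share a 2-dimensional principal subspace.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
covs = []
for _ in range(4):
    B = rng.standard_normal((2, 2))
    covs.append(Q @ B @ B.T @ Q.T)

P, Ys = common_components(covs, r=2)
errs = [np.linalg.norm(S - P @ Y @ P.T) for S, Y in zip(covs, Ys)]
```

When the hypothesis holds exactly, as constructed here, the reconstruction error vanishes; real data would trade off error across the T matrices.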

Generalized Kernel-Based Dynamic Mode Decomposition [article]

Patrick Heas, Cedric Herzet, Benoit Combes
2020 arXiv   pre-print
Reduced modeling in high-dimensional reproducing kernel Hilbert spaces offers the opportunity to approximate efficiently non-linear dynamics.  ...  In this work, we devise an algorithm based on low rank constraint optimization and kernel-based computation that generalizes a recent approach called "kernel-based dynamic mode decomposition".  ...  Acknowledgements This work was supported by the French Agence Nationale de la Recherche through the BECOSE Project.  ... 
arXiv:2002.04375v1 fatcat:ypzmdwyprvg4hfx7muiovmvne4

Reduced Basis Decomposition: a Certified and Fast Lossy Data Compression Algorithm [article]

Yanlai Chen
2015 arXiv   pre-print
Given high-dimensional data X, the method approximates it by Y T (≈ X), with Y the low-dimensional surrogate and T the transformation matrix.  ...  The first class, projective methods, builds an explicit linear projection from the high-dimensional space to the low-dimensional one.  ...  Moreover, its low-dimensional vectors are equipped with an error estimator indicating how closely they approximate the high-dimensional data.  ... 
arXiv:1503.05947v1 fatcat:cpdezip44zcjddornx3fugu4gm
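The Y·T (≈ X) factorization described above can be sketched with a greedy, Gram-Schmidt-style basis selection. This is an illustration in the spirit of reduced basis methods only; the paper's certified error estimator is omitted:

```python
import numpy as np

def rbd(X, r):
    # Greedy basis construction: at each step, take the column of X with
    # the largest residual, orthonormalize it into Y, and repeat.  The
    # compressed representation is then T = Y.T @ X, with X ~= Y @ T.
    Y = np.zeros((X.shape[0], 0))
    for _ in range(r):
        R = X - Y @ (Y.T @ X)              # residual of every column
        j = np.argmax(np.linalg.norm(R, axis=0))
        v = R[:, j] / np.linalg.norm(R[:, j])
        Y = np.hstack([Y, v[:, None]])
    return Y, Y.T @ X

# Exactly rank-3 data: 3 basis columns reproduce all 20 columns.
rng = np.random.default_rng(3)
X = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 20))
Y, T = rbd(X, r=3)
err = np.linalg.norm(X - Y @ T) / np.linalg.norm(X)
```

The residual norm tracked in the loop is exactly the quantity a certified variant would report as its error bound.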

Hybrid Kronecker Product Decomposition and Approximation [article]

Chencheng Cai and Rong Chen and Han Xiao
2019 arXiv   pre-print
As an effective dimension reduction tool, singular value decomposition is often used to analyze high-dimensional matrices, which are traditionally assumed to have a low-rank matrix approximation.  ...  We assume that a high-dimensional matrix can be approximated by a sum of a small number of Kronecker products of matrices with potentially different configurations, termed a hybrid Kronecker outer product  ...  Introduction High-dimensional data often has low-dimensional structure that allows significant dimension reduction and compression.  ... 
arXiv:1912.02955v1 fatcat:mljs5cjxbzf25hk2hsl3zstps4
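The classical single-term version of this problem, the nearest Kronecker product, has a closed-form solution via Van Loan's rearrangement trick, sketched here; the paper's hybrid multi-term, multi-configuration extension is more involved:

```python
import numpy as np

def nearest_kron(X, m1, n1, m2, n2):
    # Van Loan rearrangement: reorder X so that a Kronecker product
    # becomes the rank-1 matrix vec(A) @ vec(B).T, then take the leading
    # singular pair to get the best single-term approximation.
    R = X.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vt[0].reshape(m2, n2)
    return A, B

rng = np.random.default_rng(4)
A0 = rng.standard_normal((2, 3))
B0 = rng.standard_normal((4, 5))
X = np.kron(A0, B0)                       # an exact Kronecker product
A, B = nearest_kron(X, 2, 3, 4, 5)
err = np.linalg.norm(X - np.kron(A, B)) / np.linalg.norm(X)
```

A and B are recovered only up to a scalar (and sign) exchange, which is why correctness is checked on the product rather than the factors.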

Geometric structure of sum-of-rank-1 decompositions for n-dimensional order-p symmetric tensors

Olexiy Kyrgyzov, Deniz Erdogmus
2008 2008 IEEE International Symposium on Circuits and Systems  
develop a set of structured bases that can be utilized to decompose any symmetric tensor into its sum-of-rank-one (canonical) decomposition.  ...  The canonical sum-of-rank-one decomposition of tensors is a fundamental linear algebraic problem encountered in signal processing, machine learning, and other scientific fields.  ...  TENSORS AND LOW-RANK APPROXIMATIONS The term tensor has different definitions in physics and mathematics.  ... 
doi:10.1109/iscas.2008.4541674 dblp:conf/iscas/KyrgyzovE08 fatcat:sl7zydc3ofasjarvaicor6m3ca

Reduced basis decomposition: A certified and fast lossy data compression algorithm

Yanlai Chen
2015 Computers and Mathematics with Applications  
Given high-dimensional data X, the method approximates it by Y T (≈ X), with Y the low-dimensional surrogate and T the transformation matrix.  ...  The first class, projective methods, builds an explicit linear projection from the high-dimensional space to the low-dimensional one.  ...  Moreover, its low-dimensional vectors are equipped with an error estimator indicating how closely they approximate the high-dimensional data.  ... 
doi:10.1016/j.camwa.2015.09.023 fatcat:66ybzreekfdwnmruwynh6vvrke

Feature Extraction and Classification using Leading Eigenvectors: Applications to Biomedical and Multi-Modal mHealth Data

Georgina Cosma, T. Martin McGinnity
2019 IEEE Access  
This paper demonstrates that leading eigenvectors derived from singular value decomposition (SVD) and Nyström approximation methods can be utilized for classification tasks without the need to construct  ...  Experiments were conducted with 14 biomedical datasets to compare classifier performance when the input to a classifier is a matrix containing: 1) the leading eigenvectors resulting from each approximation  ...  One approach to reducing computational complexity is to perform low-rank approximation in order to reduce the dimensionality of the matrices.  ... 
doi:10.1109/access.2019.2932868 fatcat:dcayyqgdvzemphnda6kj6vbcni
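Using leading singular vectors as a classification feature basis can be sketched as follows: plain truncated SVD on mean-centered data. The Nyström variant and the downstream classifiers from the paper are omitted:

```python
import numpy as np

def svd_features(X, k):
    # Project row-samples onto the top-k right singular vectors of the
    # mean-centered data, i.e. the leading eigenvectors of the covariance.
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k], mu    # features, basis, mean

rng = np.random.default_rng(5)
X = rng.standard_normal((10, 4))
F, V, mu = svd_features(X, k=4)            # k = full rank: lossless here
X_hat = F @ V + mu                         # reconstruction from features
```

In practice k is chosen much smaller than the data dimension, and F (not X) is fed to the classifier.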

Non-Linear Reduced Modeling by Generalized Kernel-Based Dynamic Mode Decomposition [article]

Patrick Héas and Cédric Herzet and Benoit Combès
2020 arXiv   pre-print
Reduced modeling with this algorithm reveals a gain in approximation accuracy, as shown by numerical simulations, and in complexity with respect to existing approaches.  ...  In this work, we propose to achieve such an approximation by first embedding the trajectories in a reproducing kernel Hilbert space (RKHS), which exhibits appealing approximation and computational capabilities  ...  High-Dimensional Model and Experimental Setting.  ... 
arXiv:1710.10919v5 fatcat:in5d36wizjef3hvwolaxu7fgbi

A Method for Extracting High-Quality Core Data from Edge Computing Nodes

Yanping Chen, Mingdao Zhao, Hong Xia, Xiaodong Jin, Zhongmin Wang, Zhong Yu
2019 Mathematical Problems in Engineering  
uses the incremental decomposition algorithm to extract high-quality core data.  ...  Experiments show that the approximate tensor reconstructed from the tensor with 15% core data can guarantee 90% accuracy.  ...  How to effectively extract high-quality core data from large-scale low-quality data sets is a severe challenge for big data analysis.  ... 
doi:10.1155/2019/3834846 fatcat:cmtwuprgtrap5djrowxjq5xile
Showing results 1 — 15 out of 52,815 results