
Partitioning and Communication Strategies for Sparse Non-negative Matrix Factorization

Oguz Kaya, Ramakrishnan Kannan, Grey Ballard
2018 Proceedings of the 47th International Conference on Parallel Processing - ICPP 2018  
Non-negative matrix factorization (NMF), the problem of finding two non-negative low-rank factors whose product approximates an input matrix, is a useful tool for many data mining and scientific applications  ...  Key-words: sparse non-negative matrix factorization, hypergraph partitioning, parallel algorithms * For the published version of this research report, please refer to https://doi.  ...  Conclusion In this paper, we compared various partitioning and communication strategies used in the literature in the context of non-negative matrix factorization.  ... 
doi:10.1145/3225058.3225127 dblp:conf/icpp/KayaKB18 fatcat:bbi6k72rmbfyfc27peh3osa7hy
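
As a point of reference for the partitioning strategies this paper compares, here is a minimal sketch (assuming NumPy/SciPy; this is not the hypergraph-based method studied in the paper) of a plain 1D row partition that balances nonzeros across processes, i.e. the kind of baseline that hypergraph partitioning is typically measured against.

    import numpy as np
    import scipy.sparse as sp

    def row_partition_by_nnz(A, num_procs):
        """Greedily cut contiguous row blocks so each process holds roughly equal nonzeros."""
        A = sp.csr_matrix(A)
        nnz_per_row = np.diff(A.indptr)            # nonzero count of every row
        target = A.nnz / num_procs                 # ideal nonzeros per process
        bounds, acc = [0], 0
        for i, r in enumerate(nnz_per_row, start=1):
            acc += r
            if acc >= target and len(bounds) < num_procs:
                bounds.append(i)
                acc = 0
        bounds.append(A.shape[0])
        return [(bounds[p], bounds[p + 1]) for p in range(len(bounds) - 1)]

    A = sp.random(1000, 800, density=0.01, format="csr", random_state=0)
    print(row_partition_by_nnz(A, 4))              # list of (row_start, row_end) ranges, one per process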

Semi-External Memory Sparse Matrix Multiplication for Billion-Node Graphs

Da Zheng, Disa Mhembere, Vince Lyzinski, Joshua T. Vogelstein, Carey E. Priebe, Randal Burns
2017 IEEE Transactions on Parallel and Distributed Systems  
We apply our SpMM to three important data analysis tasks--PageRank, eigensolving, and non-negative matrix factorization--and show that our SEM implementations significantly advance the state of the art  ...  In contrast, we scale sparse matrix multiplication beyond memory capacity by implementing sparse matrix dense matrix multiplication (SpMM) in a semi-external memory (SEM) fashion; i.e., we keep the sparse  ...  Non-negative matrix factorization Non-negative matrix factorization (NMF) [18] finds two non-negative low-rank matrices W and H to approximate a matrix A ≈ W H.  ... 
doi:10.1109/tpds.2016.2618791 fatcat:n7fc34xn4rbmfgoqhuz5tiedjy
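
The kernel the snippet above refers to is sparse matrix times tall-skinny dense matrix (SpMM). A minimal in-memory, single-node sketch with SciPy follows; the paper's contribution, keeping the sparse matrix in semi-external memory, is not reproduced here.

    import numpy as np
    import scipy.sparse as sp

    n, k = 5_000, 16                               # graph size and number of dense columns
    A = sp.random(n, n, density=1e-3, format="csr", random_state=0)   # sparse graph matrix
    X = np.random.rand(n, k)                       # dense input block (e.g. an NMF factor)
    Y = A @ X                                      # SpMM: one sweep over the nonzeros of A
    print(Y.shape)                                 # (5000, 16)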

Hypergraph Partitioning for Faster Parallel PageRank Computation [chapter]

Jeremy T. Bradley, Douglas V. de Jager, William J. Knottenbelt, Aleksandar Trifunović
2005 Lecture Notes in Computer Science  
Our results show that hypergraph-based partitioning substantially reduces communication volume over conventional partitioning schemes (by up to three orders of magnitude), while still maintaining computational  ...  It uses an iterative numerical method to compute the maximal eigenvector of a transition matrix derived from the web's hyperlink structure and a user-centred model of web-surfing behaviour.  ...  We observe that 1D and 2D hypergraph partitioning successfully reduce the communication overhead by factors of 2 and 6 respectively.  ... 
doi:10.1007/11549970_12 fatcat:2zppceyomraxri4i4fpgxywvo4
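
The per-iteration kernel behind the PageRank computation described above is a sparse matrix-vector product with a damped transition matrix. Below is a small sketch of the standard power iteration (assuming SciPy; the hypergraph partitioning of the matrix itself is not shown).

    import numpy as np
    import scipy.sparse as sp

    def pagerank(A, d=0.85, tol=1e-10, max_iter=100):
        """A[i, j] = 1 if page j links to page i; returns the rank vector."""
        n = A.shape[0]
        out_deg = np.asarray(A.sum(axis=0)).ravel()
        out_deg[out_deg == 0] = 1.0                # crude guard for dangling pages
        P = A @ sp.diags(1.0 / out_deg)            # column-scaled transition matrix
        x = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            x_new = d * (P @ x) + (1.0 - d) / n    # one damped power-iteration step
            if np.abs(x_new - x).sum() < tol:
                break
            x = x_new
        return x

Each iteration is dominated by the product P @ x, which is exactly the operation whose communication volume the partitioning is meant to reduce.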

Exploiting Matrix Dependency for Efficient Distributed Matrix Computation

Lele Yu, Yingxia Shao, Bin Cui
2015 Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data - SIGMOD '15  
We next design a dependency-oriented cost model to select an optimal execution strategy for each operation, and generate a communication efficient execution plan for the matrix computation program.  ...  Distributed matrix computation is a popular approach for many large-scale data analysis and machine learning tasks.  ...  Code 1 is an illustration of the popular ML algorithm Gaussian Non-Negative Matrix Factorization [16] (GNMF). GNMF is an algorithm for finding two factor matrices, W and H, such that V ≈ W H.  ... 
doi:10.1145/2723372.2723712 dblp:conf/sigmod/YuSC15 fatcat:dvxuqclsyjakvncshggoojcqy4
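
For context, the GNMF referred to in the snippet iterates multiplicative updates over W and H. Below is a minimal single-machine sketch using the standard Lee-Seung updates for V ≈ W H; the paper's actual topic, distributing these matrix operations efficiently, is not shown.

    import numpy as np

    def gnmf(V, rank, iters=200, eps=1e-9):
        """Multiplicative-update NMF: find W, H >= 0 with V ≈ W H."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W, H = rng.random((m, rank)), rng.random((rank, n))
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; stays non-negative
            W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; stays non-negative
        return W, H

    V = np.random.rand(100, 80)
    W, H = gnmf(V, rank=10)
    print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative reconstruction error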

ParCYCLIC: finite element modelling of earthquake liquefaction response on parallel computers

Jun Peng, Jinchi Lu, Kincho H. Law, Ahmed Elgamal
2004 International Journal for Numerical and Analytical Methods in Geomechanics (Print)  
ordering strategies to minimize storage space for matrix coefficients; (c) an efficient scheme for the allocation of sparse matrix coefficients among the processors; and (d) a parallel sparse direct solver  ...  The elements of the computational strategy, designed for distributed-memory message-passing parallel computer systems, include: (a) an automatic domain decomposer to partition the finite element mesh; (b) nodal  ...  The matrix assignment strategy described partitions a sparse matrix into two basic sets: the principal diagonal block submatrices and the row segments outside the principal block submatrices.  ... 
doi:10.1002/nag.384 fatcat:s2r4jt6fazh7lmz6ptei3o7ppm
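
Purely as an illustration of the assignment described in the last fragment (not ParCYCLIC's actual implementation), the sketch below splits the nonzeros of a sparse matrix into the principal diagonal blocks of a given row/column partition and the row segments that fall outside them, assuming SciPy.

    import numpy as np
    import scipy.sparse as sp

    def split_diag_vs_offdiag(A, block_bounds):
        """block_bounds: sorted block boundaries shared by rows and columns, e.g. [0, 250, 500, 750, 1000]."""
        A = sp.coo_matrix(A)
        row_block = np.searchsorted(block_bounds, A.row, side="right") - 1
        col_block = np.searchsorted(block_bounds, A.col, side="right") - 1
        in_diag = row_block == col_block           # nonzero lies inside its principal diagonal block
        diag = sp.coo_matrix((A.data[in_diag], (A.row[in_diag], A.col[in_diag])), shape=A.shape)
        off = sp.coo_matrix((A.data[~in_diag], (A.row[~in_diag], A.col[~in_diag])), shape=A.shape)
        return diag, off

    A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
    diag, off = split_diag_vs_offdiag(A, [0, 250, 500, 750, 1000])
    print(diag.nnz + off.nnz == A.nnz)             # True: each nonzero lands in exactly one set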

A distributed approach for accelerating sparse matrix arithmetic operations for high-dimensional feature selection

Antonela Tommasel, Daniela Godoy, Alejandro Zunino, Cristian Mateos
2016 Knowledge and Information Systems  
This work proposes a novel approach for distributing sparse-matrix arithmetic operations on computer clusters, aiming to speed up the processing of high-dimensional matrices.  ...  Matrix computations are both fundamental and ubiquitous in computational science, and as a result they are frequently used in numerous disciplines of scientific computing and engineering.  ...  Figure 2 shows an example of the Static strategy for the matrix A and an arbitrary value of 4 for the Granularity-Factor.  ... 
doi:10.1007/s10115-016-0981-5 fatcat:isawn5cyxrhuxm7hupjwz7bx34

Min Max Normalization Based Data Perturbation Method for Privacy Protection

YOGENDRA KUMAR JAIN, SANTOSH KUMAR BHANDARE
2013 International Journal of Computer and Communication Technology  
These private and sensitive data cannot be shared with everyone, so privacy protection is required in data mining systems to avoid privacy leakage.  ...  The privacy parameters are used to measure privacy protection, and the utility measure shows the performance of the data mining technique after data distortion.  ...  ., also used non-negative matrix factorization for data perturbation [10]. They investigated the use of truncated non-negative matrix factorization (NMF) with sparseness constraints.  ... 
doi:10.47893/ijcct.2013.1201 fatcat:hcyuqaulgjasfltu5tkrbgsttu
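
The transformation named in the title is ordinary min-max normalization, applied as a perturbation that rescales each attribute into a new range while preserving relative order. A minimal sketch (assuming NumPy; the paper's specific parameter choices are not reproduced):

    import numpy as np

    def min_max_normalize(X, new_min=0.0, new_max=1.0):
        """Column-wise min-max normalization of a numeric data matrix."""
        X = np.asarray(X, dtype=float)
        col_min, col_max = X.min(axis=0), X.max(axis=0)
        span = np.where(col_max > col_min, col_max - col_min, 1.0)   # guard constant columns
        return (X - col_min) / span * (new_max - new_min) + new_min

    X = np.array([[30.0, 64000.0], [45.0, 90000.0], [52.0, 41000.0]])
    print(min_max_normalize(X))                    # each column rescaled into [0, 1]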

Generic Multiplicative Methods for Implementing Machine Learning Algorithms on MapReduce [article]

Song Liu, Peter Flach, Nello Cristianini
2011 arXiv   pre-print
Two versions of large-scale matrix multiplication are discussed in this paper, and different methods are developed for both cases with regard to their unique computational characteristics and problem settings  ...  Compared with a standard implementation with computational complexity O(m^3) in the worst case, the large-scale matrix multiplication experiments prove our design is considerably more efficient and maintains  ...  Non-Negative Matrix Factorization The definition of NMF is as follows: Definition 1.  ... 
arXiv:1111.2111v2 fatcat:3kczhfqssvcs7koz7gmcvkd7fa
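
As a toy illustration of the kind of computation targeted above, the following single-machine simulation follows the textbook two-stage MapReduce scheme for C = A B over sparse key/value entries; it is a generic sketch, not the paper's specific design.

    from collections import defaultdict

    def mapreduce_matmul(A_entries, B_entries):
        """A_entries: {(i, j): value}, B_entries: {(j, k): value}; returns {(i, k): value}."""
        # Map + shuffle: group A by its column index j and B by its row index j.
        by_j = defaultdict(lambda: ([], []))
        for (i, j), v in A_entries.items():
            by_j[j][0].append((i, v))
        for (j, k), v in B_entries.items():
            by_j[j][1].append((k, v))
        # Reduce stage 1: emit partial products keyed by (i, k); stage 2: sum them.
        C = defaultdict(float)
        for a_vals, b_vals in by_j.values():
            for i, a in a_vals:
                for k, b in b_vals:
                    C[(i, k)] += a * b
        return dict(C)

    A = {(0, 0): 1.0, (0, 1): 2.0, (1, 1): 3.0}
    B = {(0, 0): 4.0, (1, 0): 5.0, (1, 1): 6.0}
    print(mapreduce_matmul(A, B))   # {(0, 0): 14.0, (0, 1): 12.0, (1, 0): 15.0, (1, 1): 18.0}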

Parallel Algorithms for Constrained Tensor Factorization via Alternating Direction Method of Multipliers

Athanasios P. Liavas, Nicholas D. Sidiropoulos
2015 IEEE Transactions on Signal Processing  
Tensor factorization has proven useful in a wide range of applications, from sensor array processing to communications, speech and audio signal processing, and machine learning.  ...  With few recent exceptions, all tensor factorization algorithms were originally developed for centralized, in-memory computation on a single machine; and the few that break away from this mold do not easily  ...  conditional updates of the factor matrices with non-negative and/or sparse least-squares updates.  ... 
doi:10.1109/tsp.2015.2454476 fatcat:z2yhxvgnibd57ne2dkd2hwqhvi
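
The per-factor subproblem in this class of methods is a (non-negative) least-squares update solved with ADMM. A minimal dense, single-node sketch of ADMM for min_x ||Ax - b||^2 subject to x >= 0 follows (assuming NumPy; the distributed tensor machinery of the paper is not reproduced).

    import numpy as np

    def nnls_admm(A, b, rho=1.0, iters=200):
        """Solve min_x ||A x - b||^2 subject to x >= 0 via ADMM splitting x = z, z >= 0."""
        n = A.shape[1]
        AtA, Atb = A.T @ A, A.T @ b
        L = np.linalg.cholesky(AtA + rho * np.eye(n))        # factor once, reuse every iteration
        x = z = u = np.zeros(n)
        for _ in range(iters):
            rhs = Atb + rho * (z - u)
            x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update: linear solve
            z = np.maximum(0.0, x + u)                          # z-update: projection onto x >= 0
            u = u + x - z                                       # dual update
        return z

    A = np.random.randn(50, 10)
    b = np.random.randn(50)
    print(nnls_admm(A, b).min() >= 0)   # True: the returned solution is non-negative

Factoring the Gram matrix once and reusing it in every iteration is the standard trick that keeps the per-iteration cost of such ADMM updates low.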

Scalable Relational Query Processing on Big Matrix Data [article]

Yongyang Yu, Mingjie Tang, Walid G. Aref
2021 arXiv   pre-print
., relational operations for pre-processing or post-processing the dataset, and matrix operations for core model computations.  ...  Furthermore, optimized partitioning schemes for the input matrices are developed to facilitate the performance of join operations based on a cost model that minimizes the communication overhead. The proposed  ...  National Science Foundation under Grant Numbers III-1815796 and IIS-1910216.  ... 
arXiv:2110.01767v2 fatcat:rruef2pllram3ik4x7ydkxl4pq

Localized matrix factorization for recommendation based on matrix block diagonal forms

Yongfeng Zhang, Min Zhang, Yiqun Liu, Shaoping Ma, Shi Feng
2013 Proceedings of the 22nd international conference on World Wide Web - WWW '13  
We show formally that the LMF framework is suitable for matrix factorization and that any decomposable matrix factorization algorithm can be integrated into this framework.  ...  Smaller and denser submatrices are then extracted from this RBBDF matrix to construct a BDF matrix for more effective collaborative prediction.  ...  The authors thank Jun Zhu for the fruitful discussions and the reviewers for their constructive suggestions.  ... 
doi:10.1145/2488388.2488520 dblp:conf/www/ZhangZLMF13 fatcat:tsnpkpvphng3vol4a3tllq775q
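
As a rough illustration of the block structure exploited above (not the paper's RBBDF permutation algorithm), once a rating matrix is in block diagonal form its diagonal blocks can be read off as connected components of the bipartite row/column graph; a sketch assuming SciPy:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import connected_components

    def extract_diagonal_blocks(A):
        """Return the dense diagonal submatrices of a block-diagonal sparse matrix A."""
        A = sp.csr_matrix(A)
        m, n = A.shape
        # Bipartite adjacency over m row-nodes followed by n column-nodes.
        bipartite = sp.bmat([[None, A], [A.T, None]], format="csr")
        _, labels = connected_components(bipartite, directed=False)
        blocks = []
        for c in np.unique(labels):
            rows = np.where(labels[:m] == c)[0]
            cols = np.where(labels[m:] == c)[0]
            if len(rows) and len(cols):
                blocks.append(A[rows][:, cols])    # one denser submatrix per component
        return blocks

    blocks = extract_diagonal_blocks(sp.block_diag([np.ones((3, 2)), np.ones((2, 4))], format="csr"))
    print([b.shape for b in blocks])               # [(3, 2), (2, 4)]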

An Exploration of Optimization Algorithms for High Performance Tensor Completion

Shaden Smith, Jongsoo Park, George Karypis
2016 SC16: International Conference for High Performance Computing, Networking, Storage and Analysis  
Tensor completion is most often accomplished via low-rank sparse tensor factorization, a computationally expensive non-convex optimization problem which has only recently been studied in the context of  ...  We explore opportunities for parallelism on shared- and distributed-memory systems and address challenges such as memory- and operation-efficiency, load balance, cache locality, and communication.  ...  ACKNOWLEDGMENTS The authors would like to thank anonymous reviewers for insightful feedback, Mikhail Smelyanskiy for valuable discussions, and Karlsson et al. for sharing source code used for evaluation  ... 
doi:10.1109/sc.2016.30 dblp:conf/sc/SmithPK16 fatcat:ecmrvmsngvcerjtfwlr23z3dom

Floating-point sparse matrix-vector multiply for FPGAs

Michael deLorimier, André DeHon
2005 Proceedings of the 2005 ACM/SIGDA 13th international symposium on Field-programmable gate arrays - FPGA '05  
For benchmark matrices from the Matrix Market Suite we project 1.5 double precision Gflops/FPGA for a single VirtexII-6000-4 and 12 double precision Gflops for 16 Virtex IIs (750Mflops/FPGA).  ...  Microprocessors do not deliver near their peak floating-point performance on efficient algorithms that use the Sparse Matrix-Vector Multiply (SMVM) kernel.  ...  The sparse matrix representations only explicitly represent non-zero matrix entries and only perform operations on the non-zero matrix elements.  ... 
doi:10.1145/1046192.1046203 dblp:conf/fpga/DeLorimierD05 fatcat:esan26neynashfim64rqbdylca
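
For reference, the SMVM kernel discussed above touches only the stored nonzeros. A plain-Python CSR sketch of that access pattern follows (the FPGA mapping itself is of course not shown):

    import numpy as np

    def csr_spmv(indptr, indices, data, x):
        """y = A x for A stored in CSR form (indptr, indices, data)."""
        y = np.zeros(len(indptr) - 1)
        for i in range(len(y)):                        # one row at a time
            for p in range(indptr[i], indptr[i + 1]):  # only that row's nonzeros
                y[i] += data[p] * x[indices[p]]
        return y

    # Tiny example: A = [[2, 0, 1], [0, 3, 0]]
    indptr, indices, data = [0, 2, 3], [0, 2, 1], [2.0, 1.0, 3.0]
    print(csr_spmv(indptr, indices, data, np.array([1.0, 1.0, 1.0])))   # [3. 3.]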

High-Level Strategies for Parallel Shared-Memory Sparse Matrix-Vector Multiplication

Albert-Jan Nicholas Yzelman, Dirk Roose
2014 IEEE Transactions on Parallel and Distributed Systems  
The sparse matrix-vector multiplication is an important kernel, but is hard to efficiently execute even in the sequential case.  ...  The theoretical scalability and memory usage of the various strategies are analysed, and experiments on multiple NUMA architectures confirm the validity of the results.  ...  Acknowledgements This work is funded by Intel and by the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT).  ... 
doi:10.1109/tpds.2013.31 fatcat:ubktdj7i7jbqrfdtcngtceq5wq

Non-Negative Matrix Factorizations for Multiplex Network Analysis

Vladimir Gligorijevic, Yannis Panagakis, Stefanos P. Zafeiriou
2018 IEEE Transactions on Pattern Analysis and Machine Intelligence  
In this paper, we propose Network Fusion for Composite Community Extraction (NF-CCE), a new class of algorithms, based on four different non-negative matrix factorization models, capable of extracting  ...  Networks have been a general tool for representing, analyzing, and modeling relational data arising in several domains.  ...  , statistical inference and structure-based methods, as well as methods that rely on non-negative matrix factorizations: • Graph partitioning aims to group nodes into partitions such that the cut size,  ... 
doi:10.1109/tpami.2018.2821146 pmid:29993651 fatcat:643fgfsd4rhjdh37qp5qukuo5a
Showing results 1 — 15 out of 16,229 results