
Efficient data structures for sparse network representation

Jörkki Hyvönen, Jari Saramäki, Kimmo Kaski
2008 International Journal of Computer Mathematics  
In this article, we present a cache-efficient data structure, a variant of a linear probing hash table, for representing edge sets of such networks.  ...  Modern-day computers are characterized by a striking contrast between the processing power of the CPU and the latency of main memory accesses.  ...  The library will be in the form of an extension to a scripting language, with the basic structures and algorithms implemented in a low-level language for performance.  ... 
doi:10.1080/00207160701753629 fatcat:gxva4pkgijhibhlh2oyo76p5ca
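The entry above describes a linear probing hash table for edge sets. Below is a minimal Python sketch of that idea: a hash set of undirected edges using open addressing with linear probing. It is only illustrative; the paper's structure is a cache-tuned low-level implementation, and the class name `EdgeSet` and its methods are inventions for this sketch.

```python
# Minimal linear-probing hash set for undirected edges (u, v).
# Illustrative sketch only, not the paper's cache-optimized structure.

EMPTY = None

class EdgeSet:
    def __init__(self, capacity=16):
        self.table = [EMPTY] * capacity
        self.size = 0

    def _key(self, u, v):
        return (u, v) if u <= v else (v, u)      # undirected: normalize order

    def _probe(self, key):
        # Linear probing: scan consecutive slots until the key or an empty slot.
        i = hash(key) % len(self.table)
        while self.table[i] is not EMPTY and self.table[i] != key:
            i = (i + 1) % len(self.table)
        return i

    def add(self, u, v):
        if self.size * 2 >= len(self.table):     # keep load factor <= 0.5
            self._grow()
        key = self._key(u, v)
        i = self._probe(key)
        if self.table[i] is EMPTY:
            self.table[i] = key
            self.size += 1

    def __contains__(self, edge):
        key = self._key(*edge)
        return self.table[self._probe(key)] == key

    def _grow(self):
        old = [k for k in self.table if k is not EMPTY]
        self.table = [EMPTY] * (2 * len(self.table))
        self.size = 0
        for u, v in old:
            self.add(u, v)

edges = EdgeSet()
edges.add(1, 2)
edges.add(3, 1)
print((2, 1) in edges, (2, 3) in edges)          # True False
```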

Prover Efficient Public Verification of Dense or Sparse/Structured Matrix-Vector Multiplication [chapter]

Jean-Guillaume Dumas, Vincent Zucca
2017 Lecture Notes in Computer Science  
With the emergence of cloud computing services, computationally weak devices (Clients) can delegate expensive tasks to more powerful entities (Servers).  ...  The obtained algorithms are essentially optimal in the amortized model: the overhead for the Server is limited to a very small constant factor, even in the sparse or structured matrix case; and the computational  ...  exist an efficient algorithm to compute e(g_1, g_2).  ... 
doi:10.1007/978-3-319-59870-3_7 fatcat:gckbmnfyafbljgwsxuda4qtlia
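To make the idea of verifying a delegated matrix-vector product concrete, here is the classic Freivalds probabilistic check in Python. Note this is a private-coin technique and not the pairing-based, publicly verifiable protocol of the paper above; it only illustrates "verification far cheaper than recomputation", and `freivalds_check` is a name chosen for this sketch.

```python
# Freivalds-style probabilistic check that y == A @ x without redoing the
# full multiplication.  NOT the pairing-based protocol of the paper; shown
# only to illustrate verification cheaper than recomputation.
import numpy as np

def freivalds_check(A, x, y, rounds=20, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    for _ in range(rounds):
        r = rng.integers(0, 2, size=A.shape[0])     # random 0/1 vector
        # (r @ A) @ x needs two matrix-vector products, O(n^2),
        # instead of recomputing A @ x from scratch.
        if not np.allclose((r @ A) @ x, r @ y):
            return False                            # definitely wrong
    return True                                     # correct with high probability

A = np.random.rand(200, 200)
x = np.random.rand(200)
y = A @ x
print(freivalds_check(A, x, y))         # True
y_bad = y.copy()
y_bad[0] += 1.0                         # corrupt one entry of the claimed result
print(freivalds_check(A, x, y_bad))     # almost surely False
```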

Tree-based Space Efficient Formats for Storing the Structure of Sparse Matrices

I. Simecek, D. Langr
2014 Scalable Computing: Practice and Experience  
Sparse storage formats describe the way sparse matrices are stored in computer memory.  ...  Extensive research has been conducted on these formats in the context of performance optimization of sparse matrix-vector multiplication algorithms, but memory-efficient formats for storing sparse  ...  So, if matrix A is stored in the MBT format, 20 bits are needed to represent its structure.  ... 
doi:10.12694/scpe.v15i1.962 fatcat:rf4md6zoynegvawpqepfwodgjm
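In the spirit of such tree-based structure formats, the sketch below encodes the nonzero pattern of a square matrix as a quadtree bit string: each internal node emits four bits saying which quadrants contain nonzeros. This is only an assumption-laden illustration of the general idea; the paper's MBT format differs in its exact layout, and `encode_structure` is a name invented here.

```python
# Quadtree-style bit encoding of a sparse matrix's nonzero pattern.
# Each internal node emits 4 bits flagging which quadrants are nonempty.
# Illustrative only; not the paper's MBT format.
import numpy as np

def encode_structure(mask):
    """mask: square boolean array with power-of-two side length."""
    n = mask.shape[0]
    if n == 1:
        return ""                      # leaf: presence already implied by parent bit
    h = n // 2
    quads = [mask[:h, :h], mask[:h, h:], mask[h:, :h], mask[h:, h:]]
    bits = "".join("1" if q.any() else "0" for q in quads)
    for q in quads:
        if q.any():
            bits += encode_structure(q)
    return bits

pattern = np.zeros((8, 8), dtype=bool)
pattern[0, 0] = pattern[3, 5] = pattern[7, 7] = True
bits = encode_structure(pattern)
print(len(bits), "bits encode the structure of", int(pattern.sum()), "nonzeros")
```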

HCMB: A stable and efficient algorithm for processing the normalization of highly sparse Hi-C contact data

Honglong Wu, Xuebin Wang, Mengtian Chu, Dongfang Li, Lixin Cheng, Ke Zhou
2021 Computational and Structural Biotechnology Journal  
In particular, the high sparsity of the data poses a major challenge for correction, indicating the urgent need for a stable and efficient method for Hi-C data normalization.  ...  Normalization is a critical pre-processing step of downstream analyses for the elimination of systematic and technical biases from chromatin contact matrices due to different mappability, GC content, and  ...  In summary, the HCMB algorithm achieves computational efficiency in matrix balancing comparable to that of the KR method.  ... 
doi:10.1016/j.csbj.2021.04.064 pmid:34025950 pmcid:PMC8120939 fatcat:gnooygebobfrlnvtyqxono2fdq
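The entry above compares HCMB with balancing-based normalization such as the KR method. As background, here is a plain Sinkhorn-Knopp balancing sketch: iteratively rescale a nonnegative contact matrix so its rows and columns sum to one. It is the generic idea behind matrix-balancing normalization, not the HCMB algorithm itself; `sinkhorn_balance` is a name chosen for this sketch.

```python
# Classic Sinkhorn-Knopp balancing: find diagonal scalings so that
# diag(r) C diag(c) is (approximately) doubly stochastic.
# Generic background sketch, not the HCMB method.
import numpy as np

def sinkhorn_balance(C, iters=200):
    C = np.asarray(C, dtype=float)
    r = np.ones(C.shape[0])
    c = np.ones(C.shape[1])
    for _ in range(iters):
        r = 1.0 / (C @ c)          # fix row sums of diag(r) C diag(c)
        c = 1.0 / (C.T @ r)        # fix column sums
    return r[:, None] * C * c[None, :]

C = np.random.rand(5, 5)
C = C + C.T                        # symmetric, strictly positive "contact" matrix
B = sinkhorn_balance(C)
print(np.round(B.sum(axis=0), 6))  # column sums: 1 exactly
print(np.round(B.sum(axis=1), 6))  # row sums: close to 1 after convergence
```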

Dynamic Sparse Tensor Algebra Compilation [article]

Stephen Chou, Saman Amarasinghe
2021 arXiv   pre-print
the results of sparse tensor algebra computations in dynamic data structures.  ...  This paper shows how to generate efficient tensor algebra code that computes on dynamic sparse tensors, which have sparsity structures that evolve over time.  ...  Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Award Numbers DE-SC0008923 and DE-SC0018121; and DARPA under Awards HR0011-18-3-0007 and HR0011-20-9-0017  ... 
arXiv:2112.01394v1 fatcat:blwlfyandbdajmrql4r6cabkmy
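For a sense of what a dynamic sparse tensor is, the sketch below stores coordinates in a hash map so values can be inserted while computing. The paper is about automatically generating such kernels for many dynamic formats (lists, trees, hash maps); this hand-written `DynSparseTensor` class is just one assumed example.

```python
# A hand-written dynamic sparse tensor: coordinate tuple -> value in a hash
# map, so insertions during a computation are cheap.  One manual example of
# the dynamic formats the paper targets with generated code.
from collections import defaultdict

class DynSparseTensor:
    def __init__(self, ndim):
        self.ndim = ndim
        self.data = defaultdict(float)          # (i, j, ...) -> value

    def insert(self, coord, value):
        assert len(coord) == self.ndim
        self.data[coord] += value               # accumulate into the entry
        if self.data[coord] == 0.0:
            del self.data[coord]                # keep the structure truly sparse

    def dot(self, other):
        """Elementwise product summed over all coordinates."""
        return sum(v * other.data.get(k, 0.0) for k, v in self.data.items())

A = DynSparseTensor(3)
B = DynSparseTensor(3)
A.insert((0, 1, 2), 2.0)
A.insert((4, 0, 0), -1.0)
B.insert((0, 1, 2), 3.0)
print(A.dot(B))   # 6.0
```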

Computation on Sparse Neural Networks: an Inspiration for Future Hardware [article]

Fei Sun, Minghai Qin, Tianyun Zhang, Liu Liu, Yen-Kuang Chen, Yuan Xie
2020 arXiv   pre-print
We observe that the search for the sparse structure can be a general methodology for high-quality model explorations, in addition to a strategy for high-efficiency model execution.  ...  In this paper, we summarize the current status of the research on the computation of sparse neural networks, from the perspective of the sparse algorithms, the software frameworks, and the hardware accelerations  ...  Work on sparse computation frameworks and sparse hardware accelerators mainly aims to improve computation efficiency, i.e., reducing the amount of computation relative to a large, dense model  ... 
arXiv:2004.11946v1 fatcat:2lnbtmi4grb65nxcxab4kz6pvy
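One common baseline among the sparsification strategies such surveys cover is magnitude-based weight pruning, sketched below: zero out the smallest-magnitude weights to obtain a sparse structure. This is not a specific method from the paper above, and `magnitude_prune` is a name invented for this sketch.

```python
# Magnitude-based weight pruning: drop the smallest-magnitude weights to
# obtain a sparse mask.  Generic baseline, not a method from the survey.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return a pruned copy of `weights` plus the boolean mask of kept entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                       # number of weights to drop
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

W = np.random.randn(64, 64)
W_sparse, mask = magnitude_prune(W, sparsity=0.9)
print(f"kept {mask.mean():.1%} of the weights")         # about 10%
```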

Distributed Sparse Matrices for Very High Level Languages [chapter]

John R. Gilbert, Steve Reinhardt, Viral B. Shah
2008 Advances in Computers  
Parallel computing is becoming ubiquitous, particularly due to the advent of multi-core architectures.  ...  We describe the design and implementation of a sparse matrix infrastructure for Star-P, a parallel implementation of the MATLAB® programming language.  ...  We conclude that distributed sparse matrices provide a powerful set of primitives for numerical and combinatorial computing.  ... 
doi:10.1016/s0065-2458(08)00005-3 fatcat:hienrbxdu5hdjbnadvnulvl7ku
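To illustrate the data decomposition behind a distributed sparse matrix, the sketch below emulates a block-row distribution in which each "processor" owns a contiguous band of rows and computes its slice of y = A x locally. Star-P's real implementation distributes compressed sparse blocks across processes; this serial, single-process emulation and the name `distributed_spmv` are assumptions of the sketch.

```python
# Emulated block-row distribution of a sparse matrix: each "processor" owns
# a band of rows and multiplies it by the replicated vector x.  Serial
# emulation only; the real system runs the blocks on separate processes.
import numpy as np
import scipy.sparse as sp

def distributed_spmv(A, x, nprocs=4):
    n = A.shape[0]
    bounds = np.linspace(0, n, nprocs + 1, dtype=int)   # row range per processor
    local_blocks = [A[bounds[p]:bounds[p + 1]].tocsr() for p in range(nprocs)]
    local_results = [block @ x for block in local_blocks]
    return np.concatenate(local_results)                # "gather" the row slices

A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
x = np.ones(1000)
print(np.allclose(distributed_spmv(A, x), A @ x))       # True
```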

High-performance sparse matrix-vector multiplication on GPUs for structured grid computations

Jeswin Godwin, Justin Holewinski, P. Sadayappan
2012 Proceedings of the 5th Annual Workshop on General Purpose Processing with Graphics Processing Units - GPGPU-5  
We develop efficient sparse matrix-vector multiplication for structured grid computations on GPU architectures using CUDA [25].  ...  In this paper, we address efficient sparse matrix-vector multiplication for matrices arising from structured grid problems with high degrees of freedom at each grid node.  ...  This work was supported in part by the National Science Foundation through award 0926688 and by the Department of Energy (subcontract to The Ohio State University from RNET Technologies; DOE award DE-SC0002434  ... 
doi:10.1145/2159430.2159436 dblp:conf/asplos/GodwinHS12 fatcat:snpuv57vajekxd7twvi3jt7nua
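Structured-grid matrices (e.g. stencil discretizations) have only a few nonzero diagonals, so a diagonal (DIA) layout lets SpMV stream over dense arrays. The sketch below shows that access pattern on the CPU with NumPy; the paper's contribution is CUDA kernels on the GPU, and the diagonal convention and the name `dia_spmv` are assumptions of this sketch.

```python
# Diagonal-format (DIA) SpMV for a stencil-like matrix: diags[k][i] holds
# A[i, i + offsets[k]].  CPU sketch of the access pattern only.
import numpy as np

def dia_spmv(offsets, diags, x):
    n = x.size
    y = np.zeros(n)
    for off, diag in zip(offsets, diags):
        if off >= 0:
            y[: n - off] += diag[: n - off] * x[off:]
        else:
            y[-off:] += diag[-off:] * x[: n + off]
    return y

# 1-D Laplacian (3-point stencil) stored as three diagonals of length n.
n = 10
offsets = [-1, 0, 1]
diags = [np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0)]
x = np.arange(n, dtype=float)
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
print(np.allclose(dia_spmv(offsets, diags, x), A @ x))   # True
```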

Comparison of Vector Operations of Open-Source Linear Optimization Kernels

2018 Acta Polytechnica Hungarica  
Finally, a computational study is performed comparing the performance of vector operations of different linear optimization kernels to validate the high efficiency of our kernel.  ...  An important field of optimization is linear optimization, which is very widely used. It is also often the hidden computational engine behind algorithms of other fields of optimization.  ...  The linear algebraic kernel of the Pannon Optimizer was developed based on the results of a performance analysis of sparse data structures and with consideration of computationally heavy simplex-specific  ... 
doi:10.12700/aph.15.1.2018.1.4 fatcat:gqalnlhleja4bpgg2svuresj2m
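The vector operations such linear optimization kernels benchmark are typically sparse dot products and sparse axpy updates against dense vectors. Below is a generic sketch of those two primitives with a sparse vector stored as parallel index/value arrays; it is not the Pannon Optimizer's actual kernel, and the function names are illustrative.

```python
# Sparse vector kernels of the kind a simplex-based LP solver needs:
# a sparse vector as parallel (indices, values) arrays, with a dot product
# and an in-place axpy against a dense vector.  Generic sketch.
import numpy as np

def sparse_dot(idx, val, dense):
    """<sparse, dense>: touch only the stored positions."""
    return float(np.dot(val, dense[idx]))

def sparse_axpy(alpha, idx, val, dense):
    """dense += alpha * sparse, in place (np.add.at also handles repeated indices)."""
    np.add.at(dense, idx, alpha * val)

idx = np.array([2, 5, 9])
val = np.array([1.0, -2.0, 0.5])
x = np.arange(10, dtype=float)
print(sparse_dot(idx, val, x))      # 1*2 - 2*5 + 0.5*9 = -3.5
sparse_axpy(2.0, idx, val, x)
print(x[2], x[5], x[9])             # 4.0 1.0 10.0
```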

Efficient quantum circuits for arbitrary sparse unitaries

Stephen P. Jordan, Pawel Wocjan
2009 Physical Review A. Atomic, Molecular, and Optical Physics  
One can formulate a model of computation based on the composition of sparse unitaries, which includes the quantum Turing machine model, the quantum circuit model, anyonic models, permutational quantum computation  ...  However, we show that quantum circuits can efficiently implement any unitary provided it has at most polynomially many nonzero entries in any row or column, and these entries are efficiently computable  ...  Conversely, quantum gates are row-sparse, column-sparse, row-computable, and column-computable unitaries due to their tensor product structure. Thus, the sparse unitary model is equivalent to BQP.  ... 
doi:10.1103/physreva.80.062301 fatcat:3lg2yjwg2rdilh4n4gnbeq56ee

Technical Note: Improving the computational efficiency of sparse matrix multiplication in linear atmospheric inverse problems

Vineet Yadav, Anna M. Michalak
2016 Geoscientific Model Development Discussions  
Matrix multiplication of two sparse matrices is a fundamental operation in linear Bayesian inverse problems for computing covariance matrices of observations and a posteriori uncertainties.  ...  Here we present a hybrid-parallel sparse-sparse matrix multiplication approach that is more efficient by a third in terms of execution time and operation count relative to standard sparse matrix multiplication  ...  Sparse-sparse (SS) matrix multiplication forms the computational backbone of scientific computation in many fields.  ... 
doi:10.5194/gmd-2016-204 fatcat:u5qutmnqabbyrfbtk5dfol27se
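As a reference point for sparse-sparse matrix multiplication, here is the textbook row-wise (Gustavson-style) SpGEMM kernel in CSR form, checked against SciPy. The paper's hybrid-parallel algorithm exploits the symmetry and structure of the covariance computations and is more efficient than this serial baseline; `spgemm` is a name chosen for the sketch.

```python
# Row-wise (Gustavson-style) sparse-sparse product in CSR form, using a
# hash-map accumulator per output row.  Textbook serial baseline only.
import numpy as np
import scipy.sparse as sp

def spgemm(A, B):
    A, B = A.tocsr(), B.tocsr()
    indptr, indices, data = [0], [], []
    for i in range(A.shape[0]):
        acc = {}                                       # sparse accumulator for row i
        for kp in range(A.indptr[i], A.indptr[i + 1]):
            k, a_ik = A.indices[kp], A.data[kp]
            for jp in range(B.indptr[k], B.indptr[k + 1]):
                j = B.indices[jp]
                acc[j] = acc.get(j, 0.0) + a_ik * B.data[jp]
        indices.extend(acc.keys())
        data.extend(acc.values())
        indptr.append(len(indices))
    return sp.csr_matrix((data, indices, indptr), shape=(A.shape[0], B.shape[1]))

A = sp.random(50, 40, density=0.05, format="csr", random_state=1)
B = sp.random(40, 60, density=0.05, format="csr", random_state=2)
print(np.allclose(spgemm(A, B).toarray(), (A @ B).toarray()))   # True
```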

A Highly Efficient Implementation of Multiple Precision Sparse Matrix-Vector Multiplication and Its Application to Product-type Krylov Subspace Methods [article]

Tomonori Kouya
2014 arXiv   pre-print
We evaluate the performance of the Krylov subspace method by using highly efficient multiple precision sparse matrix-vector multiplication (SpMV).  ...  sparse matrix collections in a memory-restricted computing environment.  ...  The results of the numerical experiments show that preconditioning in multiple precision computation is not efficient due to the effect of the matrix structure and other such factors, if it performs better  ... 
arXiv:1411.2377v1 fatcat:mld2ljfc4ndjdf5rupyycd4l2i
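The CSR SpMV kernel itself is precision-agnostic, which the sketch below illustrates by running it on Python's exact `Fraction` values as a stand-in for the MPFR-style multiple precision arithmetic the paper actually uses and tunes.

```python
# CSR sparse matrix-vector product over exact rationals (fractions.Fraction),
# standing in for multiple precision floats.  The kernel is the same either way.
from fractions import Fraction

def csr_spmv(indptr, indices, data, x):
    n = len(indptr) - 1
    y = [Fraction(0)] * n
    for i in range(n):
        s = Fraction(0)
        for p in range(indptr[i], indptr[i + 1]):
            s += data[p] * x[indices[p]]
        y[i] = s
    return y

# 3x3 example:  [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
indptr  = [0, 2, 3, 5]
indices = [0, 2, 1, 0, 2]
data    = [Fraction(2), Fraction(1), Fraction(3), Fraction(4), Fraction(5)]
x       = [Fraction(1, 3), Fraction(1, 7), Fraction(2)]
print(csr_spmv(indptr, indices, data, x))   # [Fraction(8, 3), Fraction(3, 7), Fraction(34, 3)]
```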

Exploiting sparse Markov and covariance structure in multiresolution models

Myung Jin Choi, Venkat Chandrasekaran, Alan S. Willsky
2009 Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09  
We propose a new class of Gaussian MR models that capture the residual correlations within each scale using sparse covariance structure.  ...  This model leads to an efficient, new inference algorithm that is similar to multipole methods in computational physics.  ...  Evaluating the right-hand side only involves multiplications of a sparse matrix Σ_c and a vector, so x_new can be computed efficiently.  ... 
doi:10.1145/1553374.1553397 dblp:conf/icml/ChoiCW09 fatcat:2hsm2ss3gbhuvpyexvlzyh5q6q

Reduce the rank calculation of a high-dimensional sparse matrix based on network controllability theory [article]

Chen Zhao, Yuqing Liu, Li Hu, Zhengzhong Yuan
2022 arXiv   pre-print
Our method offers an efficient pathway to quickly estimate the rank of a high-dimensional sparse matrix when the time cost of computing the rank by SVD is unacceptable.  ...  Notwithstanding recent advances that improve on traditional singular value decomposition (SVD), an efficient estimation algorithm for the rank of a high-dimensional sparse matrix is still lacking.  ...  This work is supported by the National Natural Science Foundation of China (Grant Nos. 61703136 and 61672206), the Natural Science Foundation of Hebei (Grant Nos.  ... 
arXiv:2110.13146v2 fatcat:f2taidotgzaifbsuhuuq3rnaq4
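A neighbouring idea to the controllability-based estimate above is the structural rank, computed from the sparsity pattern alone via maximum bipartite matching; it is cheap and upper-bounds the numerical rank. The sketch below contrasts it with the exact SVD-based rank. This is not the paper's estimation algorithm, only a related and readily available tool.

```python
# Cheap pattern-only structural rank (maximum bipartite matching) versus the
# expensive exact rank from a full SVD.  Related idea, not the paper's method.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import structural_rank

A = sp.random(1000, 1000, density=0.002, format="csr", random_state=0)
print("structural rank:", structural_rank(A))                  # fast, uses only the pattern
print("numerical rank :", np.linalg.matrix_rank(A.toarray()))  # slow, full dense SVD
```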

Performance Optimization for Sparse AᵗAx in Parallel on Multicore CPU

Yuan TAO, Yangdong DENG, Shuai MU, Zhenzhong ZHANG, Mingfa ZHU, Limin XIAO, Li RUAN
2014 IEICE transactions on information and systems  
The sparse matrix operation, y ← y + AᵗAx, where A is a sparse matrix and x and y are dense vectors, is a widely used computing pattern in High Performance Computing (HPC) applications.  ...  Experiments show that our technique outperforms the Compressed Sparse Row (CSR) based solution in POSKI by up to 2.5-fold on over 70% of benchmarking matrices. key words: sparse AᵗAx, compressed sparse  ...  [4] introduced the CSB format to store a sparse matrix to enable efficient computations of both Ax and Aᵗx.  ... 
doi:10.1587/transinf.e97.d.315 fatcat:bex7ierbfvfyrjedfi2dfwwbye
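The baseline for y ← y + AᵗAx is two sparse matrix-vector products, t = Ax followed by y += Aᵗt, so that AᵗA is never formed explicitly. The sketch below shows that reference computation with SciPy; the paper goes further and fuses and blocks the two passes, which this sketch does not attempt.

```python
# Reference computation of y <- y + A^t A x as two SpMV passes, without
# ever forming A^t A.  Baseline only; the paper fuses/blocks the passes.
import numpy as np
import scipy.sparse as sp

def ata_x(A, x, y):
    t = A @ x           # first pass:  t = A x
    y += A.T @ t        # second pass: y += A^t t
    return y

A = sp.random(500, 300, density=0.02, format="csr", random_state=0)
x = np.random.rand(300)
y = np.zeros(300)
ata_x(A, x, y)
print(np.allclose(y, (A.T @ A) @ x))   # True
```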