22,837 Hits in 4.2 sec

Array language support for parallel sparse computation

Bradford L. Chamberlain, Lawrence Snyder
2001 Proceedings of the 15th international conference on Supercomputing - ICS '01  
Array  ...  SPARSE REGIONS 4.1 Motivation  ... 
doi:10.1145/377792.377820 dblp:conf/ics/ChamberlainS01 fatcat:zb3tjb64kvelrefe4fi5g2pzfy

Efficient support of parallel sparse computation for array intrinsic functions of Fortran 90

Rong-Guey Chang, Tyng-Ruey Chuang, Jenq Kuen Lee
1998 Proceedings of the 12th international conference on Supercomputing - ICS '98  
Keywords: array intrinsic functions of Fortran 90, non-zero element, parallel sparse computation, parallel sparse support, complexity analysis  ...  Our work is, to the best of our knowledge, the first to provide sparse and parallel sparse support for the array intrinsics of Fortran 90.  ...  In addition, we would like to acknowledge the National Center for High-Performance Computing, Taiwan, for providing access to IBM SP-2 machines.  ... 
doi:10.1145/277830.277845 dblp:conf/ics/ChangCL98 fatcat:ufdkjc2iofa6bhzzahpydaomym

Data-parallel support for numerical irregular problems

E.L. Zapata, O. Plata, R. Asenjo, G.P. Trabado
1999 Parallel Computing  
Second, irregular data structures, derived from computations involving sparse matrices, grids, trees, graphs, etc.  ...  This paper discusses the effective parallelization of numerical irregular codes, focusing on the definition and use of data-parallel extensions to express the parallelism that they exhibit.  ...  We also thank Ian Duff and all members of the parallel algorithm team at CERFACS, Toulouse (France), for their kind help and collaboration, as well as Søren Toxvaerd, at the Department of Chemistry, University  ... 
doi:10.1016/s0167-8191(99)00090-3 fatcat:ymeizehfevcfvkrivsnce2hfs4

Irregular Computations in Fortran – Expression and Implementation Strategies

Jan F. Prins, Siddhartha Chatterjee, Martin Simons
1999 Scientific Programming  
Modern dialects of Fortran enjoy wide use and good support on high‐performance computers as performance‐oriented programming languages.  ...  For experimental validation of these techniques, we explore nested data‐parallel implementations of the sparse matrix‐vector product and the Barnes–Hut n‐body algorithm by hand‐coding thread‐based (using  ...  This research was supported in part by NSF Grants #CCR-9711438 and #INT-9726317. Chatterjee is supported in part by NSF CAREER Award #CCR-9501979.  ... 
doi:10.1155/1999/607659 fatcat:a7qljvoccfhxhit5o5lja5f7hi

CCA-LISI: On Designing A CCA Parallel Sparse Linear Solver Interface

Fang Liu, Randall Bramley
2007 2007 IEEE International Parallel and Distributed Processing Symposium  
Sparse linear solvers account for much of the execution time in many high-performance computing (HPC) applications, and not every solver works on all problems.  ...  This work is part of the Common Component Architecture (CCA) [27] effort to design a common interface among various parallel high-performance linear solver libraries, hence componentizing them and enabling  ...  Acknowledgments This work is supported in part by National Science Foundation Grants EIA-0202048, MRI CDA-0116050, and the DoE Office of Science's Center for Component Technology for Terascale Simulation  ... 
doi:10.1109/ipdps.2007.370224 dblp:conf/ipps/LiuB07 fatcat:lapwm5a54zdlhegrzkjc64a37i

Expressing Irregular Computations in Modern Fortran Dialects [chapter]

Jan F. Prins, Siddhartha Chatterjee, Martin Simons
1998 Lecture Notes in Computer Science  
Modern dialects of Fortran enjoy wide use and good support on high-performance computers as performance-oriented programming languages.  ...  By providing the ability to express nested data parallelism in Fortran, we enable irregular computations to be incorporated into existing applications with minimal rewriting and without sacrificing performance  ...  This research was supported in part by NSF Grants #CCR-9711438 and #INT-9726317. Chatterjee is supported in part by NSF CAREER Award #CCR-9501979.  ... 
doi:10.1007/3-540-49530-4_1 fatcat:7u5msbfdu5eq7mdqq76pziere4

Vienna-Fortran/HPF extensions for sparse and irregular problems and their compilation

M. Ujaldon, E.L. Zapata, B.M. Chapman, H.P. Zima
1997 IEEE Transactions on Parallel and Distributed Systems  
The overall result is a powerful mechanism for dealing efficiently with sparse matrices in data parallel languages and their compilers for DMMPs.  ...  Together with the data distribution for the matrix, this enables the compiler and runtime system to translate sequential sparse code into explicitly parallel message-passing code.  ...  ), and by the Austrian Ministry for Science and Research (BMWF Grant GZ 308.9281-IV/3/93).  ... 
doi:10.1109/71.629489 fatcat:li5kogrrwba2tiaug6w3bipe3e

A High-Performance Sparse Tensor Algebra Compiler in Multi-Level IR [article]

Ruiqin Tian, Luanzheng Guo, Jiajia Li, Bin Ren, Gokcen Kestor
2021 arXiv   pre-print
We propose a tensor algebra domain-specific language (DSL) and compiler infrastructure to automatically generate kernels for mixed sparse-dense tensor algebra operations, named COMET.  ...  parallel SpMV, SpMM, and TTM over TACO, respectively.  ...  For small computations, LLVM co-routines introduce less overhead than OpenMP threading (which is beneficial for larger parallel regions).  ... 
arXiv:2102.05187v1 fatcat:nc5bu7bgobgsxjbppkjtesxmkq

Data parallel Haskell

Manuel M. T. Chakravarty, Roman Leshchinskiy, Simon Peyton Jones, Gabriele Keller, Simon Marlow
2007 Proceedings of the 2007 workshop on Declarative aspects of multicore architectures - DAMP '07  
Our current aim is to provide a convenient programming environment for SMP parallelism, and especially multicore architectures.  ...  We extended the original programming model and its implementation, both of which were first popularised by the NESL language, in terms of expressiveness as well as efficiency.  ...  with existing support for two forms of more explicit parallel programming.  ... 
doi:10.1145/1248648.1248652 dblp:conf/popl/ChakravartyLJKM07 fatcat:7knk4yiqmnbqlacv47gfxofiue

Using R for Iterative and Incremental Processing

Shivaram Venkataraman, Indrajit Roy, Alvin AuYoung, Robert S. Schreiber
2012 USENIX Workshop on Hot Topics in Cloud Computing  
We argue that array-based languages, like R [1], are ideal for expressing these algorithms, and we should extend these languages for processing in the cloud.  ...  Many of these algorithms are, by nature, iterative and perform incremental computations, neither of which is efficiently supported by current frameworks.  ...  This goal forces us to ask: what kind of memory management and runtime support is required for scaling an array-based language like R? Sparse datasets. Most real-world datasets are sparse.  ... 
dblp:conf/hotcloud/VenkataramanRAS12 fatcat:mgpn3xv6anh4nnu3uuqvptx7um

Parallel Programmability and the Chapel Language

B.L. Chamberlain, D. Callahan, H.P. Zima
2007 The international journal of high performance computing applications  
language and its ongoing implementation.  ...  Thanks also to Robert Bocchino, Paul Cassella, and Greg Titus for reading and providing valuable feedback on drafts of this article.  ...  ZPL supports array computations using a language concept known as a region to represent distributed index sets, including sparse arrays [11, 12, 9] .  ... 
doi:10.1177/1094342007078442 fatcat:uvgfhdkzwfd6nd3tnajicopl4e

Evaluating the Impact of Programming Language Features on the Performance of Parallel Applications on Cluster Architectures [chapter]

Konstantin Berlin, Jun Huan, Mary Jacob, Garima Kochhar, Jan Prins, Bill Pugh, P. Sadayappan, Jaime Spacco, Chau-Wen Tseng
2004 Lecture Notes in Computer Science  
We evaluate the impact of programming language features on the performance of parallel applications on modern parallel architectures, particularly for the demanding case of sparse integer codes.  ...  To avoid large reductions in performance, language features must avoid interfering with compiler optimizations for local computations.  ...  Conclusions In this paper, we examined language features from a number of parallel programming paradigm/languages (MPI, UPC, OpenMP, Java, C/Pthreads, Global Arrays) for their performance and ease of use  ... 
doi:10.1007/978-3-540-24644-2_13 fatcat:js24djykkfhohk2gmc2m4dmbdu

An extensible global address space framework with decoupled task and data abstractions

Sriram Krishnamoorthy, Umit Catalyurek, Jarek Nieplocha, Atanas Rountev, P. Sadayappan
2006 Proceedings 20th IEEE International Parallel & Distributed Processing Symposium  
The use of the framework for implementation of parallel block-sparse tensor computations in the context of a quantum chemistry application is illustrated.  ...  It is particularly challenging to achieve high performance using global-addressspace languages for unstructured applications with irregular data structures.  ...  Acknowledgments We thank the National Science Foundation for the support of this research through grants 0121676, 0403342, and 0509467, and the U.S.  ... 
doi:10.1109/ipdps.2006.1639577 dblp:conf/ipps/KrishnamoorthyCNRS06 fatcat:ofshuvpcqbaidmh4tljzvn67aq

Dynamic Sparse Tensor Algebra Compilation [article]

Stephen Chou, Saman Amarasinghe
2021 arXiv   pre-print
We propose a language for precisely specifying recursive, pointer-based data structures, and we show how this language can express a wide range of dynamic data structures that support efficient modification  ...  Furthermore, our technique outperforms PAM, a parallel ordered (key-value) maps library, by 7.40× when used to implement element-wise addition of a dynamic sparse matrix to a static sparse matrix.  ...  Acknowledgments This work was supported by the Application Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA; the U.S.  ... 
arXiv:2112.01394v1 fatcat:blwlfyandbdajmrql4r6cabkmy
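The element-wise sparse addition benchmarked in this entry can be sketched as follows, under the simplifying assumption of a plain coordinate-dictionary representation rather than the recursive, pointer-based structures the paper specifies; `sparse_add` and its variable names are illustrative, not taken from the paper.

```python
def sparse_add(a, b):
    """Element-wise addition of two sparse matrices stored as
    {(row, col): value} dictionaries; entries that cancel to zero
    are dropped so the result stays sparse."""
    out = dict(a)
    for key, val in b.items():
        s = out.get(key, 0.0) + val
        if s != 0.0:
            out[key] = s
        else:
            out.pop(key, None)
    return out

a = {(0, 0): 1.0, (1, 2): 2.0}
b = {(0, 0): -1.0, (2, 1): 3.0}
print(sparse_add(a, b))  # {(1, 2): 2.0, (2, 1): 3.0}
```

A dictionary supports the efficient in-place modification that distinguishes the dynamic operand from the static one; a production structure would instead use the ordered, pointer-based representations the paper's language describes.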

Generating fast sparse matrix vector multiplication from a high level generic functional IR

Federico Pizzuti, Michel Steuwer, Christophe Dubach
2020 Proceedings of the 29th International Conference on Compiler Construction  
In this paper, we extend a generic high-level IR to enable efficient computation with sparse data structures.  ...  We use a form of dependent types to model sparse matrices in CSR format by expressing the relationship between multiple dense arrays explicitly separately storing the length of rows, the column indices  ...  Acknowledgments This work was supported by the Engineering and Physical Sciences Research Council (grant EP/L01503X/1), EPSRC Centre for Doctoral Training in Pervasive Parallelism at the University of  ... 
doi:10.1145/3377555.3377896 dblp:conf/cc/PizzutiSD20 fatcat:qwxr66wqcrb5zlcnkvndnjka64
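As a concrete illustration of the CSR layout this entry describes (separate dense arrays for row extents, column indices, and values), a minimal sparse matrix-vector product might look like the following Python sketch; the function and variable names are illustrative assumptions, not the paper's IR.

```python
def csr_spmv(row_ptr, col_idx, vals, x):
    """Multiply a CSR-format sparse matrix by a dense vector x.

    row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i;
    col_idx and vals hold their column positions and values.
    """
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += vals[k] * x[col_idx[k]]
    return y

# 3x3 matrix [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
row_ptr = [0, 2, 3, 5]
col_idx = [0, 2, 1, 0, 2]
vals = [2.0, 1.0, 3.0, 4.0, 5.0]
print(csr_spmv(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The dependent-typed IR in the paper captures the invariant this code only documents in a comment: that `col_idx` and `vals` have one entry per nonzero and `row_ptr` partitions them by row.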