36,651 Hits in 6.2 sec

Space and time efficient execution of parallel irregular computations

Cong Fu, Tao Yang
1997 SIGPLAN Notices
In this paper, issues of efficient execution of overhead-sensitive parallel irregular computation under memory constraints are addressed.  ...  The irregular parallelism is modeled by task dependence graphs with mixed granularities. The trade-off in achieving both time and space efficiency is investigated.  ...  This paper addresses issues of efficient parallel execution of irregular computation under a limited memory capacity on each processor and investigates the trade-off between time and space efficiency since  ... 
doi:10.1145/263767.263773 fatcat:fe7tm53q6re6tn4vg54u4f7llm

Parallelizing irregular algorithms

Pedro Monteiro, Miguel P. Monteiro, Keshav Pingali
2011 Proceedings of the 18th Conference on Pattern Languages of Programs - PLoP '11  
Outside of the high-performance computing domain, many applications are irregular in the sense that opportunities to exploit parallelism change throughout the computation, due to the use of complex, pointer-based  ...  However, the parallel programming community has relatively little experience in parallelizing irregular applications, and we presently lack a deep understanding of the structure of parallelism and locality  ...  This work was partially supported by project PRIA - Parallel Refinements for Irregular Applications (UTAustin/CA/0056/2008), funded by the Portuguese FCT/MCTES and FEDER.  ... 
doi:10.1145/2578903.2579141 dblp:conf/plop/MonteiroMP11 fatcat:de35ssngtvfozdzjn4fz6pprka

Solving irregularly structured problems based on distributed object model

Yudong Sun, Cho-Li Wang
2003 Parallel Computing  
The model creates an adaptive computing infrastructure for developing and executing irregular applications on distributed systems.  ...  With the rapid advance of high-performance computers and networking technologies, distributed systems have been providing a cost-effective environment for the parallel and distributed computing of large-scale  ...  The IPA (Irregular Parallel Algorithms) project proposes nested data parallelism to express the irregular computations and investigates the incorporation of nested data parallelism in the programming languages  ... 
doi:10.1016/j.parco.2003.05.006 fatcat:3f5cdh27vvevvcswf2ipod6iwa

Combining Performance Aspects of Irregular Gauss-Seidel Via Sparse Tiling [chapter]

Michelle Mills Strout, Larry Carter, Jeanne Ferrante, Jonathan Freeman, Barbara Kreaseck
2005 Lecture Notes in Computer Science  
The most time-consuming part of multigrid is the iterative smoother, such as Gauss-Seidel.  ...  Current methods for parallelizing Gauss-Seidel on irregular grids, such as multi-coloring and owner-computes-based techniques, exploit parallelism and possibly intra-iteration data reuse but not inter-iteration  ...  We used Rational PurifyPlus as part of the SEED program.  ... 
doi:10.1007/11596110_7 fatcat:jdju7gm5afetviaenxtrhmc644
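The multi-coloring strategy this abstract refers to can be illustrated with a small sketch (assumed function name and toy mesh, not code from the paper): color the irregular grid so that no two adjacent nodes share a color; then all Gauss-Seidel updates within one color class touch disjoint neighbor data and can run in parallel.

```python
def greedy_color(n, edges):
    """Assign each of n nodes the smallest color unused by its neighbors,
    so nodes of equal color are never adjacent."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    for u in range(n):
        used = {color[w] for w in adj[u]}
        c = 0
        while c in used:
            c += 1
        color[u] = c
    return color

# Small irregular "mesh": a path 0-1-2 plus an extra edge 1-3.
edges = [(0, 1), (1, 2), (1, 3)]
color = greedy_color(4, edges)
# No edge joins two nodes of the same color, so updates within a
# color class are independent and may be scheduled concurrently.
assert all(color[u] != color[v] for u, v in edges)
```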

Region-based parallelization of irregular reductions on explicitly managed memory hierarchies

Seonggun Kim, Hwansoo Han, Kwang-Moo Choe
2009 Journal of Supercomputing  
Irregular reduction is one of the important computation patterns for many complex scientific applications, and it typically requires high performance and large memory bandwidth.  ...  To relieve the burden of memory management from programmers, we develop abstractions, particularly targeted to irregular reduction, for structuring parallel tasks, mapping the parallel tasks to processing  ...  Table 5 lists the measured execution times and Fig. 6 shows the speedup of the parallelized versions over the sequential code.  ... 
doi:10.1007/s11227-009-0340-3 fatcat:tswiy42kw5hvhiclwlqfukuyri
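As a minimal illustration of the irregular-reduction pattern the abstract describes (hypothetical function and data, not taken from the paper): per-edge contributions are scattered into node arrays through indirection, so the write targets are unknown until run time, which is what makes partitioning and memory management hard.

```python
def irregular_reduction(values, edge_src, edge_dst, edge_w):
    """Classic irregular reduction: each edge's contribution is
    accumulated into both endpoint nodes through index arrays,
    so the write targets are only known at run time."""
    out = [0.0] * len(values)
    for s, d, w in zip(edge_src, edge_dst, edge_w):
        contrib = w * (values[s] - values[d])
        out[s] += contrib  # scattered write via indirection
        out[d] -= contrib
    return out

# Tiny example: 3 nodes, 2 edges (0->1 with weight 1, 1->2 with weight 2).
print(irregular_reduction([1.0, 2.0, 3.0], [0, 1], [1, 2], [1.0, 2.0]))
# → [-1.0, -1.0, 2.0]
```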

A pattern language for parallelizing irregular algorithms

Pedro Monteiro, Miguel P. Monteiro
2010 Proceedings of the 2010 Workshop on Parallel Programming Patterns - ParaPLoP '10  
This paper presents the first part of a pattern language for creating parallel implementations of irregular algorithms and applications.  ...  This class of algorithms tends to organize computations in terms of data locality instead of parallelizing control in multiple threads.  ...  Having stated that, we further observe that traditional approaches to parallelization cannot be efficiently mapped to the unpredictable run-time behavior of irregular algorithms and applications.  ... 
doi:10.1145/1953611.1953624 fatcat:oi5g6iro6bckjjn5w7fky5xdri

Executing irregular scientific applications on stream architectures

Mattan Erez, Jung Ho Ahn, Jayanth Gummaraju, Mendel Rosenblum, William J. Dally
2007 Proceedings of the 21st annual international conference on Supercomputing - ICS '07  
We study four representative sub-classes of irregular algorithms, including finite-element and finite-volume methods for modeling physical systems, direct methods for n-body problems, and computations involving  ...  These codes have irregular structures where nodes have a variable number of neighbors, resulting in irregular memory access patterns and irregular control.  ...  DR IL has a 7% execution time increase and the higher computational time overhead of DR XL results in a slowdown of 24%. With no cache, removing duplicates can improve performance.  ... 
doi:10.1145/1274971.1274987 dblp:conf/ics/ErezAGRD07 fatcat:c5koegpe2fg6xm2cmwl44vwyze

The Paradigm compiler for distributed-memory multicomputers

P. Banerjee, J.A. Chandy, M. Gupta, E.W. Hodges, J.G. Holm, A. Lain, D.J. Palermo, S. Ramaswamy, E. Su
1995 Computer  
A unified approach efficiently supports regular and irregular computations using data and functional parallelism.  ...  However, to harness these machines' computational power, users must write efficient software. This process is laborious because of the absence of a global address space.  ...  This research was supported in part by the Office of Naval Research under Contract N00014-91J-1096, the National Aeronautics and Space Administration under Contract NASA NAG l-613, an AT&T graduate  ... 
doi:10.1109/2.467577 fatcat:ghmtervcfzehzlelvf2ealwgyu

Author retrospective for PYRROS

Tao Yang, Apostolos Gerasoulis
2014 25th Anniversary International Conference on Supercomputing Anniversary Volume  
Since the publication of the PYRROS project, there have been new advancements in the area of DAG scheduling algorithms, the use of DAG scheduling for irregular and large-scale computation, and software  ...  PYRROS scheduling goes through several processing stages including clustering of tasks, cluster mapping, and task execution ordering.  ...  Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. REFERENCES  ... 
doi:10.1145/2591635.2591647 dblp:conf/ics/YangG14 fatcat:35dgziip7rbrnacduqv2z7z3ji

Adaptive tuning in a dynamically changing resource environment

Seyong Lee, Rudolf Eigenmann
2008 Proceedings, International Parallel and Distributed Processing Symposium (IPDPS)  
This project is part of a larger effort that aims at creating a global information sharing system, where resources, such as software applications, computer platforms, and information can be shared, discovered  ...  (i) Our tuning system can efficiently select the best combination of compiler options, when translating programs to a target system.  ...  It also showed that it can navigate a large optimization parameter search space more efficiently and effectively than others [12, 9] .  ... 
doi:10.1109/ipdps.2008.4536399 dblp:conf/ipps/LeeE08 fatcat:b2q7mcn4ojaorn222zf3o4be2m

A dynamic scheduling method for irregular parallel programs

Steven Lucco
1992 SIGPLAN notices  
For irregular parallel operations, the compiled code gathers information about available parallelism and task execution time variance and uses this information to schedule the operation.  ...  We show a fundamental relationship between three quantities that characterize an irregular parallel computation: the total available parallelism, the optimal grain size, and the statistical variance of  ...  The aim of our research is to achieve efficient execution of parallel programs.  ... 
doi:10.1145/143103.143134 fatcat:xmgfwnr63zdijbbfjtl7yov6eu
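The trade-off among available parallelism, grain size, and execution-time variance that this abstract mentions shows up concretely in chunk self-scheduling, sketched below (illustrative names only; threading is used just to make the scheduling loop concrete): workers grab fixed-size chunks of iterations from a shared counter, and the grain size trades scheduling overhead against load imbalance.

```python
import threading

def self_schedule(n_iters, n_workers, grain):
    """Workers repeatedly claim 'grain' iterations from a shared counter.
    A larger grain lowers scheduling overhead; a smaller grain balances
    load better when per-task times vary."""
    next_iter = 0
    lock = threading.Lock()
    done = [0] * n_workers  # iterations executed by each worker

    def worker(wid):
        nonlocal next_iter
        while True:
            with lock:  # claim the next chunk atomically
                if next_iter >= n_iters:
                    return
                start = next_iter
                next_iter = min(n_iters, next_iter + grain)
                end = next_iter
            done[wid] += end - start  # "execute" the chunk

    threads = [threading.Thread(target=worker, args=(w,)) for w in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

Every iteration is executed exactly once regardless of how the workers interleave, because chunk claiming is serialized by the lock; only the per-worker share varies.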

An Inspector-Executor Algorithm for Irregular Assignment Parallelization [chapter]

Manuel Arenaz, Juan Touriño, Ramón Doallo
2004 Lecture Notes in Computer Science  
A loop with irregular assignment computations contains loopcarried output data dependences that can only be detected at run-time.  ...  The basic idea lies in splitting the iteration space of the sequential loop into sets of conflictfree iterations that can be executed concurrently on different processors.  ...  This work was supported by the Ministry of Science and Technology of Spain and FEDER funds under contract TIC2001-3694-C02-02.  ... 
doi:10.1007/978-3-540-30566-8_4 fatcat:uyokjzzmrrhsdhf22gojqqsrru
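A minimal sketch of the inspector-executor idea described here (hypothetical code, not the paper's algorithm): the inspector scans the run-time write indices of a loop `a[f(i)] = ...` and groups iterations into conflict-free sets; the executor then runs the sets in order, and the iterations inside one set could be dispatched to different processors.

```python
def inspector(write_idx):
    """Group iterations into sets whose write targets don't collide;
    iterations within one set carry no output dependence.  Greedy
    first-fit keeps conflicting iterations in source order across sets."""
    sets, seen = [], []
    for i, w in enumerate(write_idx):
        for s, used in zip(sets, seen):
            if w not in used:
                s.append(i)
                used.add(w)
                break
        else:
            sets.append([i])
            seen.append({w})
    return sets

# Executor: each set is conflict-free, so its iterations could run
# concurrently; sets execute in order to preserve last-write-wins.
write_idx = [0, 1, 0, 2, 1]
a = [None] * 3
for s in inspector(write_idx):
    for i in s:
        a[write_idx[i]] = i
# Same result as the sequential loop: a == [2, 4, 3]
```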

VFC: The Vienna Fortran Compiler

Siegfried Benkner
1999 Scientific Programming  
and delegating to the compiler the task of generating an explicitly parallel program.  ...  This comprises data locality assertions, non‐local access specifications and the possibility of reusing runtime‐generated communication schedules of irregular loops.  ...  and HPF+ kernels.  ... 
doi:10.1155/1999/304639 fatcat:rqcpi2ih4veynjqqxwxnrqg57a

Improving Locality for Adaptive Irregular Scientific Codes [chapter]

Hwansoo Han, Chau-Wen Tseng
2001 Lecture Notes in Computer Science  
Improved locality also enhances the effectiveness of LOCALWRITE, a parallelization technique for irregular reductions based on the owner computes rule.  ...  Experiments on irregular scientific codes for a variety of meshes show our partitioning algorithms are effective for static and adaptive codes on both sequential and parallel machines.  ...  George Karypis at the University of Minnesota for providing the application meshes in our experiments.  ... 
doi:10.1007/3-540-45574-4_12 fatcat:arh6n3y75netpai2q2bxmxqx4i
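The owner-computes rule behind LOCALWRITE can be sketched as follows (illustrative function and data, not from the paper): each processor visits only the edges that touch nodes it owns, so every write stays local; an edge cut by the partition is visited by both owners, trading duplicated computation for synchronization-free writes.

```python
def localwrite_partition(edges, owner, pid):
    """Select the edges processor pid must visit under owner-computes:
    any edge with at least one endpoint owned by pid.  The processor
    then updates only its own endpoints, so all writes are local."""
    return [(u, v) for u, v in edges if owner[u] == pid or owner[v] == pid]

edges = [(0, 1), (1, 2), (2, 3)]
owner = [0, 0, 1, 1]  # nodes 0,1 on processor 0; nodes 2,3 on processor 1
work0 = localwrite_partition(edges, owner, 0)  # [(0, 1), (1, 2)]
work1 = localwrite_partition(edges, owner, 1)  # [(1, 2), (2, 3)]
# The cut edge (1, 2) appears in both lists: duplicated flops,
# but neither processor ever writes remote data.
```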

Efficient Run-Time Support for Irregular Block-Structured Applications

Stephen J. Fink, Scott B. Baden, Scott R. Kohn
1998 Journal of Parallel and Distributed Computing  
• for_all loop iterations: each one executes independently on one SPMD process. • Storage model: distribute each block of data to its own logical address space, one space per processor. • Little compiler  ...  • Point: represents a point in n-dim space. • Region: rectangular subset of Points. • Grid: array of data indexed by a Region. • XArray: array of Grids of different (irregular) shape.  ...  Conclusion • Structural abstractions hide some of the dirty work required for efficient communication within irregular block decompositions. • Despite KeLP being a high-level abstraction over MPI, performs  ... 
doi:10.1006/jpdc.1998.1437 fatcat:nghso34ejzaybaps6cnj7i3ocq
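The structural abstractions listed in this entry (Point, Region, Grid) can be approximated in a few lines (a sketch under assumed semantics, not KeLP's actual C++ API): a Region is a rectangular index set, and Region intersection is the core operation a block-structured code uses to work out which data neighboring blocks must exchange.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    """Rectangular subset of points, half-open [lo, hi) per dimension."""
    lo: tuple
    hi: tuple

    def contains(self, p):
        return all(l <= x < h for l, x, h in zip(self.lo, p, self.hi))

    def intersect(self, other):
        """Overlap of two Regions, or None if they are disjoint --
        the operation block-structured codes use to find ghost data."""
        lo = tuple(max(a, b) for a, b in zip(self.lo, other.lo))
        hi = tuple(min(a, b) for a, b in zip(self.hi, other.hi))
        return Region(lo, hi) if all(l < h for l, h in zip(lo, hi)) else None

# Two overlapping 2-D blocks exchange the data in their intersection.
ghost = Region((0, 0), (4, 4)).intersect(Region((2, 2), (6, 6)))
# ghost == Region(lo=(2, 2), hi=(4, 4))
```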
Showing results 1 — 15 out of 36,651 results