High-performance parallel graph reduction
[chapter]
1989
Lecture Notes in Computer Science
This paper outlines some of the issues raised by parallel compiled graph reduction, and presents the approach we have adopted for our parallel machine, GRIP. ...
Parallel graph reduction is an attractive implementation for functional programming languages because of its simplicity and inherently distributed nature. ...
Compiled graph reduction Our first model for parallel reduction was an interpretive one called the four-stroke reduction engine [Ctac86a], and we now have a running parallel implementation of this machine ...
doi:10.1007/3540512845_40
fatcat:rhom626ekvap7kq3dnjqmapfsi
Implementation of speculative parallelism in functional languages
1994
IEEE Transactions on Parallel and Distributed Systems
This provides a basis for identifying useful speculative parallelism in a program. ...
The performance of speculative evaluation is compared with that of lazy evaluation, and the necessary conditions under which speculative evaluation performs better are identified. ...
Sastry for their many helpful discussions. ...
doi:10.1109/71.329669
fatcat:r4gv6u6oifezbddnnwpadgnlqu
Implementation of Sensitivity Analysis for Automatic Parallelization
[chapter]
2008
Lecture Notes in Computer Science
... privatization or pushback parallelization, and static and dynamic evaluation of complex conditions for loop parallelization. ...
We concern ourselves with multi-version and parallel code generation as well as the use of speculative parallelization when other, less costly options fail. ...
These predicates are returned to the programmer for evaluation (for interactive compilation). ...
doi:10.1007/978-3-540-89740-8_22
fatcat:66kx5qzjcbbexkeyd4ztkgqml4
The four-stroke reduction engine
1986
Proceedings of the 1986 ACM conference on LISP and functional programming - LFP '86
This paper presents an algorithm for the parallel graph reduction of a functional program. ...
Functional languages offer a powerful lever on the programming of parallel machines, and the most promising model for implementing these languages is graph reduction. ...
A model for parallel graph reduction Our proposal for parallel graph reduction has the following features: (i) The reduction of the graph is performed by the concurrent execution of many tasks, each ...
doi:10.1145/319838.319865
fatcat:75lx6wrkjbaylbyzqau7v7mfsu
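The task-based reduction model summarized in this abstract — many tasks concurrently reducing a shared graph, with results written back into the graph so shared work happens once — can be illustrated with a minimal, hypothetical Python sketch (this is not the paper's algorithm; the `Node` and `reduce_node` names are invented for illustration):

```python
# Minimal sketch of graph reduction with sharing: nodes are mutable
# thunks overwritten in place after reduction, so a subgraph referenced
# from several places is reduced only once.

class Node:
    """A graph node: either an evaluated value or an unevaluated thunk."""
    def __init__(self, thunk=None, value=None):
        self.thunk = thunk
        self.value = value
        self.evaluated = thunk is None

def reduce_node(node):
    # Reduce to a value, updating the node in place so every other
    # reference to it sees the result (the essence of graph sharing).
    if not node.evaluated:
        node.value = node.thunk()
        node.thunk = None
        node.evaluated = True
    return node.value

# A shared subgraph: both 'left' and 'right' point at the same node,
# so the (instrumented) computation runs exactly once.
calls = []
shared = Node(thunk=lambda: calls.append(1) or 42)
left = Node(thunk=lambda: reduce_node(shared) + 1)
right = Node(thunk=lambda: reduce_node(shared) * 2)

print(reduce_node(left), reduce_node(right), len(calls))  # 43 84 1
```

In a parallel reducer each `reduce_node` call would be a task; the in-place update is what lets tasks cooperate on one graph rather than duplicating work.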
Exploitation of nested thread-level speculative parallelism on multi-core systems
2010
Proceedings of the 7th ACM international conference on Computing frontiers - CF '10
Nested thread-level speculative parallelization has been proposed as a means to exploit the hardware parallelism of such systems. ...
In this paper, we present a methodology to gauge the efficacy of nested thread-level speculation with increasing level of nesting. ...
BACKGROUND In this section, we present an overview of the speculative execution model and the basics of conditional probability. The need for exploiting nested speculative thread-level parallelism ...
doi:10.1145/1787275.1787302
dblp:conf/cf/KejariwalGTSNVBP10
fatcat:urckqjih7veqnjqn7cvvydsuvq
Why Parallel Functional Programming Matters: Panel Statement
[chapter]
2011
Lecture Notes in Computer Science
The evaluate-and-die technique improves on this by taking advantage of graph reduction. If a sparked node is needed (not speculative), then it must be attached to the parent computation. ...
The novel graph reducer GRIP (Graph Reduction in Parallel) [102, 101] is built from a network of distributed conventional processors. ...
doi:10.1007/978-3-642-21338-0_17
fatcat:n3tvygfvdbcrpmmbrnxn7xshcq
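The evaluate-and-die technique mentioned in this abstract can be sketched in a few lines (a hypothetical model, not GRIP's implementation: `Thunk`, `spark`, and `run_idle_worker` are invented names). A spark is just a pointer to an unevaluated node; if the parent needs the value first, it evaluates the node itself and the spark later finds nothing to do — it "dies" at zero cost:

```python
# Hypothetical sketch of evaluate-and-die: sparks are references to
# unevaluated nodes. An idle worker may run one in parallel; if the
# parent computation forces the node first, the spark silently dies.

sparks = []  # the spark pool

class Thunk:
    def __init__(self, fn):
        self.fn, self.value, self.done = fn, None, False
    def force(self):
        # Evaluate once; later calls return the cached value.
        if not self.done:
            self.value, self.done = self.fn(), True
        return self.value

def spark(thunk):
    sparks.append(thunk)

def run_idle_worker():
    # An idle processor drains the pool; already-forced thunks cost nothing.
    while sparks:
        sparks.pop().force()

t = Thunk(lambda: 6 * 7)
spark(t)
print(t.force())   # the parent needed the value first: 42
run_idle_worker()  # the spark finds t already evaluated and dies
```

This is why a needed (non-speculative) spark can safely be absorbed into the parent computation, as the snippet above notes.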
Discovery and exploitation of general reductions: A constraint based approach
2017
2017 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
Once discovered, we automatically generate parallel code to exploit the reduction. This approach is robust and was evaluated on C versions of well known benchmark suites: NAS, Parboil and Rodinia. ...
Discovering and exploiting scalar reductions in programs has been studied for many years. The discovery of more complex reduction operations has, however, received less attention. ...
Candidates for speculative parallelization are determined by searching for update-chains in the data flow graph. ...
doi:10.1109/cgo.2017.7863746
fatcat:6f2ahogcc5bgtili2xiggy2w3a
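The payoff of discovering a reduction, as this abstract describes, is that an update-chain such as `acc += x` — once recognized as associative — can be split into independent partial reductions combined at the end. A minimal sketch (chunking shown sequentially for clarity; each partial sum could run on its own thread — `parallel_sum` is an illustrative name, not the paper's generated code):

```python
# Sketch of exploiting a discovered reduction (an ordinary sum):
# split the loop into independent partial reductions, then combine.

def parallel_sum(xs, nchunks=4):
    size = max(1, len(xs) // nchunks)
    chunks = [xs[i:i + size] for i in range(0, len(xs), size)]
    partials = [sum(c) for c in chunks]   # independent, parallelizable
    return sum(partials)                  # final combine step

data = list(range(100))
print(parallel_sum(data))  # 4950, same as sum(data)
```

Associativity is what licenses the reordering; for non-associative updates the transformation would change the result, which is why discovery has to prove the operator's properties first.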
Exploiting coarse-grain speculative parallelism
2011
SIGPLAN notices
Speculative execution at coarse granularities (e.g., codeblocks, methods, algorithms) offers a promising programming model for exploiting parallelism on modern architectures. ...
Anumita can be used to improve the performance of hard to parallelize algorithms whose performance is highly dependent on input data. ...
During reduction, conflicting subgraphs are recolored. Despite such efforts, the challenge still persists to develop efficient parallel algorithms for vertex coloring. ...
doi:10.1145/2076021.2048110
fatcat:rsyjze662nh4pf62mi26rirdwu
Exploiting coarse-grain speculative parallelism
2011
Proceedings of the 2011 ACM international conference on Object oriented programming systems languages and applications - OOPSLA '11
Speculative execution at coarse granularities (e.g., codeblocks, methods, algorithms) offers a promising programming model for exploiting parallelism on modern architectures. ...
Anumita can be used to improve the performance of hard to parallelize algorithms whose performance is highly dependent on input data. ...
During reduction, conflicting subgraphs are recolored. Despite such efforts, the challenge still persists to develop efficient parallel algorithms for vertex coloring. ...
doi:10.1145/2048066.2048110
dblp:conf/oopsla/PylaRV11
fatcat:tgatnmdn3zekhmo4say74qibua
Exclusive squashing for thread-level speculation
2011
Proceedings of the 20th international symposium on High performance distributed computing - HPDC '11
Speculative parallelization is a runtime technique that optimistically executes sequential code in parallel, checking that no dependence violations appear. ...
Results show a reduction of 38.5% to 81.8% in the number of restarted threads for real application loops and up to a 10% speedup, depending on the amount of local computation. ...
Belén Palop for many helpful discussions on this topic. ...
doi:10.1145/1996130.1996172
dblp:conf/hpdc/Garcia-YaguezFG11
fatcat:tq66jmp3afdh3jwjkr2xzwimeq
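The speculative-parallelization model this abstract describes — run iterations optimistically, detect dependence violations, restart (squash) offenders — can be sketched abstractly in Python (a hedged toy model, not the paper's runtime: iteration functions, read sets, and write maps are invented machinery):

```python
# Toy model of speculative loop parallelization: every iteration runs
# optimistically against a stale snapshot and reports (reads, writes).
# Commit proceeds in order; an iteration that read a location written
# by a logically earlier iteration is squashed and re-run.

def speculative_execute(iters, shared):
    restarts = 0
    results = [f(shared.copy()) for f in iters]  # optimistic pass
    committed_writes = {}
    for i, (reads, writes) in enumerate(results):
        if reads & set(committed_writes):
            # Dependence violation: restart against up-to-date state.
            restarts += 1
            state = dict(shared, **committed_writes)
            reads, writes = iters[i](state)
        committed_writes.update(writes)
    shared.update(committed_writes)
    return restarts

def it0(state):              # writes a
    return set(), {"a": 1}

def it1(state):              # reads a, writes b — depends on it0
    return {"a"}, {"b": state["a"] + 1}

shared = {"a": 0, "b": 0}
restarts = speculative_execute([it0, it1], shared)
print(shared, restarts)  # {'a': 1, 'b': 2} 1
```

The restart count here is the quantity the paper's "exclusive squashing" aims to reduce: fewer unnecessary restarts means less wasted re-execution.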
GUM
1996
Proceedings of the ACM SIGPLAN 1996 conference on Programming language design and implementation - PLDI '96
For example GUM is available by FTP for a Sun SPARCserver multiprocessor and for networks of Sun SPARC workstations. ...
GUM is a portable, parallel implementation of the Haskell functional language which has been publicly released with version 0.26 of the Glasgow Haskell Compiler (GHC). ...
We also expect to need to tune our system, especially for shared-memory systems, and perhaps introduce new parallel hints that can be exploited by some classes of architecture. ...
doi:10.1145/231379.231392
dblp:conf/pldi/TrinderHMPJ96
fatcat:5gnb2fozw5g6bbdum762372ece
GUM
1996
SIGPLAN notices
For example GUM is available by FTP for a Sun SPARCserver multiprocessor and for networks of Sun SPARC workstations. ...
GUM is a portable, parallel implementation of the Haskell functional language which has been publicly released with version 0.26 of the Glasgow Haskell Compiler (GHC). ...
We also expect to need to tune our system, especially for shared-memory systems, and perhaps introduce new parallel hints that can be exploited by some classes of architecture. ...
doi:10.1145/249069.231392
fatcat:2mdtccygy5gf3oirv3hhuk6vxa
A provably time-efficient parallel implementation of full speculation
1996
Proceedings of the 23rd ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '96
Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. ...
For this purpose we consider a simple language based on speculative evaluation. ...
INTRODUCTION Futures, lenient languages, and several implementations of graph reduction for lazy languages all use speculative evaluation (call-by-speculation [Hudak and Anderson 1987] ) to expose parallelism ...
doi:10.1145/237721.237797
dblp:conf/popl/GreinerB96
fatcat:ygckgzim6jh75cfwgw743jnz6i
A provably time-efficient parallel implementation of full speculation
1999
ACM Transactions on Programming Languages and Systems
Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. ...
For this purpose we consider a simple language based on speculative evaluation. ...
INTRODUCTION Futures, lenient languages, and several implementations of graph reduction for lazy languages all use speculative evaluation (call-by-speculation [Hudak and Anderson 1987] ) to expose parallelism ...
doi:10.1145/316686.316690
fatcat:mtv7u3o7wnbz5mvkg5rm5ip62y
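The futures mentioned in this abstract — start a computation eagerly in parallel, block only when the result is actually touched — have a direct stdlib analogue. A minimal sketch using `concurrent.futures` (this illustrates the general future mechanism, not the paper's provably time-efficient scheduler):

```python
# A future: the computation starts speculatively/eagerly; the consumer
# synchronizes only on demand, when it touches the result.

from concurrent.futures import ThreadPoolExecutor

def expensive(n):
    return sum(range(n))

with ThreadPoolExecutor() as pool:
    fut = pool.submit(expensive, 1_000)  # eager start in another thread
    # ... the parent continues with other work here ...
    print(fut.result())                  # touch: block until ready
```

The paper's contribution is a cost bound on exactly this pattern under full speculation; the sketch only shows the programming model.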
DA Based Systematic Approach Using Speculative Addition for High Speed DSP Applications
2018
International Journal of Engineering & Technology
The proposed speculative adder, based on the Han-Carlson parallel-prefix topology, attains better latency reduction than the variable-latency Kogge-Stone topology. ...
In recent years, parallel-prefix topologies have emerged to offer high-speed solutions for many DSP applications. ...
In this paper, for the first time, we present speculation with variable latency to link equations via parallel-prefix computation. ...
doi:10.14419/ijet.v7i2.24.12030
fatcat:ft4s32mfbjc4fl72jheayhlke4
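The idea behind speculative addition, as in this abstract, is that long carry chains are rare: each carry can be predicted from only the previous K bit positions, with a checker triggering a slow exact pass on misprediction (the variable-latency case). An illustrative bit-level sketch — not the paper's Han-Carlson design; `K`, `speculative_add`, and the 16-bit width are assumptions for the example:

```python
# Sketch of speculative addition: predict each carry from a K-bit
# window, detect mispredictions, and fall back to exact addition.

K = 4  # speculation window (assumed parameter)

def speculative_add(a, b, bits=16):
    result, correct = 0, True
    for i in range(bits):
        # Predicted carry into bit i, using only bits i-K .. i-1.
        lo = max(0, i - K)
        mask = (1 << i) - (1 << lo)
        carry = ((a & mask) + (b & mask)) >> i & 1
        # Exact carry into bit i (the checker's reference).
        exact = (((a & ((1 << i) - 1)) + (b & ((1 << i) - 1))) >> i) & 1
        if carry != exact:
            correct = False  # misprediction: triggers the slow path
        s = ((a >> i) & 1) ^ ((b >> i) & 1) ^ carry
        result |= s << i
    # On misprediction, fall back to the exact (higher-latency) sum.
    return (result if correct else (a + b) & ((1 << bits) - 1)), correct

print(speculative_add(3, 5))       # short carry chain: prediction holds
print(speculative_add(0xFFFF, 1))  # full-length carry chain: fallback
```

In hardware the windowed carries come from a truncated parallel-prefix network, which is shallower and faster than the full prefix tree; the fallback path supplies the occasional exact result.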