Automatic loop transformations and parallelization for Java

Pedro V. Artigas, Manish Gupta, Samuel P. Midkiff, José E. Moreira
2000 Proceedings of the 14th international conference on Supercomputing - ICS '00  
This transformation, combined with other techniques that we have developed, enables the compiler to perform high order loop transformations (for better data locality) and parallelization completely automatically  ...  Furthermore, the automatic parallelization achieves speedups of up to 3.8 on four processors.  ...  Loop Parallelization We rely on the automatic loop parallelization capabilities of TPO to parallelize Java code.  ... 
doi:10.1145/335231.335232 dblp:conf/ics/ArtigasGMM00 fatcat:pxsttcbmkrgnjgz7ijukipiks4
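
The "high order loop transformations (for better data locality)" mentioned in the entry above are classic restructurings such as loop interchange and tiling. The following is a minimal hand-written Java sketch, not taken from the paper, of what a tiled (blocked) matrix multiply looks like after such a transformation; the class name and the tile size B are illustrative assumptions, whereas a compiler would derive the tile size from cache parameters once it has proven the array accesses safe.

    // Hand-written illustration (not compiler output) of a locality-oriented
    // loop transformation: an i-k-j matrix multiply restructured with tiling
    // so that B x B blocks of a, b and c stay resident in cache.
    public class TiledMatMul {
        static final int B = 64;  // illustrative tile size

        static void multiply(double[][] a, double[][] b, double[][] c, int n) {
            for (int ii = 0; ii < n; ii += B)
                for (int kk = 0; kk < n; kk += B)
                    for (int jj = 0; jj < n; jj += B)
                        // work on one tile at a time for better data locality
                        for (int i = ii; i < Math.min(ii + B, n); i++)
                            for (int k = kk; k < Math.min(kk + B, n); k++) {
                                double aik = a[i][k];
                                for (int j = jj; j < Math.min(jj + B, n); j++)
                                    c[i][j] += aik * b[k][j];
                            }
        }
    }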

Automatic Loop Transformations and Parallelization for Java

P Artigas
2001 Parallel Processing Letters  
This transformation, combined with other techniques that we have developed, enables the compiler to perform high order loop transformations (for better data locality) and parallelization completely automatically  ...  Furthermore, the automatic parallelization achieves speedups of up to 3.8 on four processors.  ...  Loop Parallelization We rely on the automatic loop parallelization capabilities of TPO to parallelize Java code.  ... 
doi:10.1016/s0129-6264(00)00016-0 fatcat:rvxbu4qu2jfizg6e6u2qpwuoga

JPT: A Java Parallelization Tool [chapter]

Kristof Beyls, Erik D'Hollander, Yijun Yu
1999 Lecture Notes in Computer Science  
In this paper, JPT is introduced, a parallelization tool which generates PVM code from a serial Java program. JPT automatically detects parallel loops and generates master and slave PVM programs.  ...  With PVM for Java however, the user still needs to partition the problem, calculate the data partitioning and program the message passing and synchronization.  ...  In this paper we focus on the automatic parallelization and efficient code generation of Java programs.  ... 
doi:10.1007/3-540-48158-3_22 fatcat:4rsau3vfebe2pnbt3sq5kd2hby
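
A minimal hand-written sketch, not JPT output, of the master/slave pattern the entry above describes: the master partitions the iteration space of a detected parallel loop and each slave executes its chunk. Plain Java threads stand in here for the PVM message-passing programs that JPT actually generates; the class and method names are illustrative.

    import java.util.ArrayList;
    import java.util.List;

    public class MasterSlaveLoop {
        public static void run(double[] data, int numSlaves) throws InterruptedException {
            List<Thread> slaves = new ArrayList<>();
            int chunk = (data.length + numSlaves - 1) / numSlaves;
            for (int s = 0; s < numSlaves; s++) {
                final int lo = s * chunk;
                final int hi = Math.min(lo + chunk, data.length);
                Thread t = new Thread(() -> {
                    for (int i = lo; i < hi; i++)
                        data[i] = data[i] * data[i];   // independent loop body
                });
                slaves.add(t);
                t.start();
            }
            for (Thread t : slaves) t.join();          // master waits for all slaves
        }
    }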

Speculative Execution of Parallel Programs with Precise Exception Semantics on GPUs [chapter]

Akihiro Hayashi, Max Grossman, Jisheng Zhao, Jun Shirako, Vivek Sarkar
2014 Lecture Notes in Computer Science  
Our approach includes (1) automatic generation of OpenCL kernels and JNI glue code from a Java-based parallel-loop construct (forall), (2) speculative execution of OpenCL kernels on GPUs, and (3) automatic generation of optimized and parallel exception-checking code for execution on the CPU.  ...  Automatic generation of optimized and parallel exception-checking code for execution on multiple CPU cores.  ... 
doi:10.1007/978-3-319-09967-5_20 fatcat:cayayngzmfbfhoghxzarhr6p4q
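
The overall idea in the entry above can be sketched, very roughly and not using the authors' code generator, as follows: the data-parallel loop runs speculatively without Java's safety checks while a cheaper exception-checking pass verifies that the original semantics (for example, in-bounds array accesses) hold; if the check fails, the speculative result is discarded and the loop is re-executed sequentially so the precise exception is raised. A parallel stream stands in for the OpenCL kernel, and all names below are illustrative.

    import java.util.stream.IntStream;

    public class SpeculativeForall {
        static void run(float[] out, float[] in, int[] idx) {
            float[] speculative = new float[out.length];

            // Speculative execution: a parallel stream stands in for the GPU kernel.
            IntStream.range(0, out.length).parallel().forEach(i -> {
                int j = Math.min(Math.max(idx[i], 0), in.length - 1); // clamping stands in for the unchecked GPU access
                speculative[i] = in[j] * 2f;
            });

            // Exception-checking pass: validates indices only, no real computation.
            boolean safe = IntStream.range(0, out.length).parallel()
                                    .allMatch(i -> idx[i] >= 0 && idx[i] < in.length);

            if (safe) {
                System.arraycopy(speculative, 0, out, 0, out.length); // commit speculative result
            } else {
                for (int i = 0; i < out.length; i++)                  // precise sequential re-execution
                    out[i] = in[idx[i]] * 2f;                         // throws at the exact failing iteration
            }
        }
    }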

Vectorization for Java [chapter]

Jiutao Nie, Buqi Cheng, Shisheng Li, Ligang Wang, Xiao-Feng Li
2010 Lecture Notes in Computer Science  
The other approach is to use automatic vectorization to generate vector instructions for Java programs. It does not require programmers to modify the original source code.  ...  The first approach is to provide a Java vectorization interface (JVI) that developers can program with, to explicitly expose the programs' data parallelism.  ...  With loop-unrolling, loop level data parallelism can be transformed into superword level parallelism, so the SLP vectorization can also be used to exploit loop level data parallelism with the help of loop  ... 
doi:10.1007/978-3-642-15672-4_3 fatcat:dojzxob4j5g6lbhyfdlliojuoy
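
The snippet above observes that loop unrolling turns loop-level data parallelism into superword-level parallelism. A hand-written illustration (not from the paper, names are illustrative): after unrolling by 4, the four adjacent, independent statements are isomorphic and can be packed by an SLP vectorizer into one 4-wide SIMD add.

    public class UnrollForSlp {
        static void add(float[] c, float[] a, float[] b) {
            int n = c.length & ~3;           // largest multiple of 4
            for (int i = 0; i < n; i += 4) { // loop unrolled by 4
                c[i]     = a[i]     + b[i];
                c[i + 1] = a[i + 1] + b[i + 1];
                c[i + 2] = a[i + 2] + b[i + 2]; // these four isomorphic statements
                c[i + 3] = a[i + 3] + b[i + 3]; // form one superword (SIMD) operation
            }
            for (int i = n; i < c.length; i++)  // scalar remainder loop
                c[i] = a[i] + b[i];
        }
    }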

Rubus: A compiler for seamless and extensible parallelism

Muhammad Adnan, Faisal Aslam, Zubair Nawaz, Syed Mansoor Sarwar, Maciej Huk
2017 PLoS ONE  
It analyses and transforms a sequential program into a parallel program automatically, without any user intervention.  ...  For five different benchmarks, on average a speedup of 34.54 times has been achieved by Rubus as compared to Java on a basic GPU having only 96 cores.  ...  We would also like to thank Phil Pratt-Szeliga, Shigeru Chiba, and Peter Calvert for their insights and suggestions.  ... 
doi:10.1371/journal.pone.0188721 pmid:29211758 pmcid:PMC5718508 fatcat:m6ua2hjqozd65lztf3lizf5bxy

A Compiler Infrastructure for High-Performance Java [chapter]

Neil V. Brewster, Tarek S. Abdelrahman
2001 Lecture Notes in Computer Science  
This paper describes the zJava compiler infrastructure, a high-level framework for the analysis and transformation of Java programs.  ...  We include support for the sharing of information between compiler passes, and a framework for interprocedural analysis.  ...  Our current research involves the incorporation of the Omega dependence test in zJava for the parallelization of loops, and the implementation of path expression analysis for the parallelization of Java  ... 
doi:10.1007/3-540-48228-8_77 fatcat:sqtneog63nhjrddw4k6vw7bs5a

Strategies for the efficient exploitation of loop-level parallelism in Java

José Oliver, Jordi Guitart, Eduard Ayguadé, Nacho Navarro, Jordi Torres
2001 Concurrency and Computation  
This paper analyzes the overheads incurred in the exploitation of loop-level parallelism using Java Threads and proposes some code transformations that minimize them.  ...  The use of such transformations results in promising performance gains that may encourage the use of Java for exploiting loop-level parallelism in the framework of OpenMP.  ...  of parallel loops (this transformation makes use of the same class for all the loops).  ... 
doi:10.1002/cpe.573 fatcat:hfecj6oqrjdubnpznf4hdksk3u
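
One of the transformations the snippet above alludes to is reusing the same class for all parallel loops, so that a new worker type (and, in the paper, new threads) need not be created per loop. A minimal hand-written sketch of that idea, not the authors' code and with illustrative names:

    public class LoopWorker implements Runnable {
        interface LoopBody { void iterate(int i); }

        private final LoopBody body;
        private final int lo, hi;

        LoopWorker(LoopBody body, int lo, int hi) {
            this.body = body; this.lo = lo; this.hi = hi;
        }

        public void run() {
            for (int i = lo; i < hi; i++) body.iterate(i);   // execute this worker's chunk
        }

        // One helper serves every parallel loop: split [0, n) among 'threads' workers.
        static void parallelFor(int n, int threads, LoopBody body) throws InterruptedException {
            Thread[] ts = new Thread[threads];
            int chunk = (n + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                int lo = t * chunk, hi = Math.min(lo + chunk, n);
                ts[t] = new Thread(new LoopWorker(body, lo, hi));
                ts[t].start();
            }
            for (Thread t : ts) t.join();
        }
    }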

Feature-Based Comparison of Language Transformation Tools

Muhammad Ilyas
2020 Lahore Garrison University research journal of computer science and information technology  
The source language considered for this purpose is C sharp (C#) and the target language is Visual Basic (VB).  ...  These factors are classes, pointers, access specifiers, functions and exceptions, etc. For this purpose, we have selected varyCode, Telerik, Multi-online converter, and InstantVB.  ...  Presented automatic source code transformation between the Octave and R fourth-generation languages; the TXL programming language was used for analysis and transformation.  ... 
doi:10.54692/lgurjcsit.2020.0404107 fatcat:lx3in2i6afecxgtz4l7gfp6c7a

Introducing concurrency in sequential Java via laws

Rafael Duarte, Alexandre Mota, Augusto Sampaio
2011 Information Processing Letters  
It is well known that one way of improving performance is by parallelization. In this paper we propose a parallelization strategy for Java using algebraic laws.  ...  We perform an experiment with two benchmarks and show that our strategy produces a gain similar to a specialized parallel version provided by the Java Grande Benchmark (JGB).  ...  Their approach is automatic, and relies on a modified virtual machine to perform the analyses and transformations devised by them.  ... 
doi:10.1016/j.ipl.2010.11.004 fatcat:fq4am5bb7vaedgfm4i2teqh3yu

NINJA: Java for High Performance Numerical Computing

José E. Moreira, Samuel P. Midkiff, Manish Gupta, Peng Wu, George Almasi, Pedro Artigas
2002 Scientific Programming  
Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers.  ...  Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code.  ...  Furthermore, current Java platforms are incapable of automatically applying important optimizations for numerical code, such as loop transformations and automatic parallelization [20].  ... 
doi:10.1155/2002/314103 fatcat:ewpurji2jfh5ljjwalxqvq352m
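
The "versioning" technique named in the entry above can be sketched as follows (a hand-written illustration, not from the paper; names are illustrative): the compiler emits a runtime test plus two versions of the loop, an optimized version whose safety the test guarantees, and the original version as a fallback with full Java exception semantics.

    public class VersionedLoop {
        static void scale(double[] dst, double[] src, int n, double f) {
            if (dst != src && n <= dst.length && n <= src.length) {
                // "Safe region": no aliasing and all accesses in bounds, proven once,
                // so this version can be reordered, unrolled or parallelized freely.
                for (int i = 0; i < n; i++) dst[i] = src[i] * f;
            } else {
                // Original version: keeps precise Java semantics (may throw
                // ArrayIndexOutOfBoundsException at the exact failing iteration).
                for (int i = 0; i < n; i++) dst[i] = src[i] * f;
            }
        }
    }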

Automating Verification of Loops by Parallelization [chapter]

Tobias Gedell, Reiner Hähnle
2006 Lecture Notes in Computer Science  
It guarantees soundness of a proof rule that transforms a loop into a universally quantified update of the state change information represented by the loop body.  ...  We show that one can replace interactive proof techniques, such as induction, with automated first-order reasoning in order to deal with parallelizable loops, where a loop can be parallelized whenever  ...  Thanks are also due to Philipp Rümmer for many inspiring discussions.  ... 
doi:10.1007/11916277_23 fatcat:xbqr2wu5zvdfxoia4rmowf7ftm
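
Schematically, in our own notation rather than the exact KeY syntax used in the paper, the rule replaces a loop whose iteration i depends only on i by a single universally quantified update of the state, which can then be handled by automated first-order reasoning instead of an induction proof:

    % schematic illustration only (assumed notation, not the paper's rule verbatim)
    [\;\texttt{for (i = 0; i < n; i++) \{ a[i] = f(i); \}}\;]\,\varphi
    \;\;\rightsquigarrow\;\;
    \{\, \mathbf{for}\ i;\ 0 \le i < n;\ a[i] := f(i) \,\}\,\varphi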

Verification by Parallelization of Parametric Code [chapter]

Tobias Gedell, Reiner Hähnle
2007 Lecture Notes in Computer Science  
A loop can be parallelized, whenever the execution of a generic iteration of its body depends only on the step parameter and not on other iterations.  ...  This guarantees soundness of a proof rule that transforms a loop into a universally quantified update of the state change information effected by the loop body.  ...  Max Schröder did the final implementation [25] which is the basis for Section 7. Thanks are also due to Philipp Rümmer for many inspiring discussions.  ... 
doi:10.1007/978-3-540-75939-3_10 fatcat:r4w2xo5jezey7o4wwm7ghv6ke4

Towards a Compiler Framework for Thread-Level Speculation

Sergio Aldea, Diego R. Llanos, Arturo Gonzalez-Escribano
2011 2011 19th International Euromicro Conference on Parallel, Distributed and Network-Based Processing  
Speculative parallelization (SP), also called Thread-Level Speculation [1], [2], [3] or Optimistic Parallelization [4], [5], aims to automatically extract loop- and task-level parallelism when a compile-time  ...  To show the possibilities of this framework, we present an automatically-generated classification of loops for several SPEC CPU2006 C benchmarks.  ... 
doi:10.1109/pdp.2011.14 dblp:conf/pdp/AldeaFG11 fatcat:e2aijq2zlfh3xdlvkyj3bctvom

Java with Auto-parallelization on Graphics Coprocessing Architecture

Guodong Han, Chenggang Zhang, King Tin Lam, Cho-Li Wang
2013 2013 42nd International Conference on Parallel Processing  
Japonica unveils an all-round system design unifying the programming style and language for transparent use of both CPU and GPU resources, automatically parallelizing all kinds of loops and scheduling  ...  Annotated loops will be automatically split into loop chunks (or tasks) being scheduled to execute on all available GPU/CPU cores.  ...  In this paper, we design an automatic Java loop parallelization and task scheduling solution for GPU-based heterogeneous many-core architectures.  ... 
doi:10.1109/icpp.2013.62 dblp:conf/icpp/HanZLW13 fatcat:rw6ndtwjszddppqtkh5ipmks2i
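
The loop-chunking idea in the entry above can be sketched, without reproducing Japonica's actual implementation, as dynamic self-scheduling: the iteration space of an annotated loop is split into fixed-size chunks and each worker (standing in for a CPU core or a GPU launcher) repeatedly grabs the next chunk, so faster devices naturally process more chunks. All names below are illustrative.

    import java.util.concurrent.atomic.AtomicInteger;

    public class ChunkScheduler {
        static void run(float[] data, int chunkSize, int workers) throws InterruptedException {
            int numChunks = (data.length + chunkSize - 1) / chunkSize;
            AtomicInteger next = new AtomicInteger(0);       // shared chunk counter
            Thread[] ts = new Thread[workers];
            for (int w = 0; w < workers; w++) {
                ts[w] = new Thread(() -> {
                    int c;
                    while ((c = next.getAndIncrement()) < numChunks) {
                        int lo = c * chunkSize, hi = Math.min(lo + chunkSize, data.length);
                        for (int i = lo; i < hi; i++)
                            data[i] = data[i] * 0.5f;        // independent loop body
                    }
                });
                ts[w].start();
            }
            for (Thread t : ts) t.join();
        }
    }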
Showing results 1 — 15 out of 14,857 results