54 Hits in 3.4 sec

Idiom recognition in the Polaris parallelizing compiler

Bill Pottenger, Rudolf Eigenmann
1995 Proceedings of the 9th international conference on Supercomputing - ICS '95  
As part of the Polaris project [5], compiler passes that recognize these idioms have been implemented and evaluated.  ...  The elimination of induction variables and the parallelization of reductions in FORTRAN programs have been shown to be integral to performance improvement on parallel computers [7, 8].  ...  variable ranges, and the integration of the idiom recognition pass with other compilation techniques.  ... 
doi:10.1145/224538.224655 dblp:conf/ics/PottengerE95 fatcat:2lzu7ubinbbwjfj7mfkzgzwd3e
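
As a rough, hypothetical illustration of the two idioms the snippet above mentions (the code and names below are invented for this sketch and are not taken from the paper), the loop contains an induction variable and a reduction; both carry cross-iteration dependences that a parallelizer such as Polaris must recognize and eliminate before the loop can run in parallel.

    // Hypothetical C++ sketch (not from the paper): a reduction idiom and an
    // induction-variable idiom in one loop.
    #include <cstddef>

    double idiom_example(const double* a, std::size_t n, double* out) {
        double sum = 0.0;   // reduction variable: only ever updated as sum = sum + ...
        std::size_t k = 0;  // induction variable: advances by a loop-invariant step
        for (std::size_t i = 0; i < n; ++i) {
            sum += a[i];         // reduction: parallelizable with partial sums plus a combine step
            out[k] = 2.0 * a[i];
            k += 2;              // closed form k = 2*i lets the dependence on k be removed
        }
        return sum;
    }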

Are parallel workstations the right target for parallelizing compilers? [chapter]

Rudolf Eigenmann, Insung Park, Michael J. Voss
1997 Lecture Notes in Computer Science  
To this end, we have retargeted the Polaris parallelizing compiler at a 4-processor Sun SPARCstation 20 and measured the performance of parallel programs.  ...  In this paper we study the success and limitations of such an approach.  ...  In addition to many commonly known passes, Polaris includes advanced capabilities for array privatization, symbolic and nonlinear data dependence testing, idiom recognition, interprocedural analysis, and  ... 
doi:10.1007/bfb0017260 fatcat:t2s32h76ajehxfogzx75kja5ya

Automatic Detection of Parallelism: A grand challenge for high performance computing

W. Blume, R. Eigenmann, J. Hoeflinger, D. Padua, P. Petersen, L. Rauchwerger, Peng Tu
1994 IEEE Parallel & Distributed Technology Systems & Applications  
The limited ability of compilers to find the parallelism in programs is a significant barrier to the use of high performance computers.  ...  We show evidence that compilers can be improved, through static and run-time techniques, to the extent that a significant group of scientific programs may be parallelized automatically.  ...  We will not discuss idiom recognition further in this paper. Some issues regarding this topic are discussed in [EHJ+93].  ... 
doi:10.1109/m-pdt.1994.329796 fatcat:q4de46zaljaglertt6kjkofuwi

Compiler-based tools for analyzing parallel programs

Brian Armstrong, Seon Wook Kim, Insung Park, Michael Voss, Rudolf Eigenmann
1998 Parallel Computing  
In this paper, we present several tools for analyzing parallel programs.  ...  We will present case studies demonstrating the tool use. These include the characterization of an industrial application and the study of new compiler techniques and portable parallel languages.  ...  Army or the Government.  ...  capabilities for array privatization, symbolic and nonlinear data dependence testing, idiom recognition, inter-procedural analysis, and symbolic program analysis.  ... 
doi:10.1016/s0167-8191(98)00019-2 fatcat:bmlnszdwxnhm3fuydzzf6ycjje

Automatic Recognition of Performance Idioms in Scientific Applications

Jiahua He, Allan E. Snavely, Rob F. Van der Wijngaart, Michael A. Frumkin
2011 2011 IEEE International Parallel & Distributed Processing Symposium  
To check these hypotheses, we proposed an automatic idiom recognition method and implemented it, based on the open-source compiler Open64.  ...  With the NAS Parallel Benchmark (NPB) as a case study, the prototype system is about 90% accurate compared with idiom classification by a human expert.  ...  ACKNOWLEDGEMENTS We thank the Intel® Internship Program and University Cooperation Program for supporting this work, and Dr. Lars Jonsson for supervising the project.  ... 
doi:10.1109/ipdps.2011.21 dblp:conf/ipps/HeSWF11 fatcat:c2gnizhrdjggbh4xapf6krrdla

Interactive compilation and performance analysis with URSA MINOR [chapter]

Insung Park, Michael Voss, Brian Armstrong, Rudolf Eigenmann
1998 Lecture Notes in Computer Science  
Ursa Minor is built using the Polaris compiler infrastructure.  ...  The tools are currently being used in several projects to develop and study parallel applications and to evaluate parallelizing compilers.  ...  Ursa Minor is evolving in a need-driven way. Its developers are involved in projects such as the characterization and analysis of real applications and the development of parallelizing compilers.  ... 
doi:10.1007/bfb0032690 fatcat:eni2kfwb45buhj2akxq4halmsq

Induction Variable Analysis without Idiom Recognition: Beyond Monotonicity [chapter]

Peng Wu, Albert Cohen, David Padua
2003 Lecture Notes in Computer Science  
With the same computational complexity, the new algorithm improves the monotonic evolution-based analysis in two aspects: more accurate dependence testing and the ability to compute closed form expressions  ...  This property captures the value changes of a variable along a given control-flow path of a program.  ...  Related Work: Most induction variable analyses focus on idiom recognition and closed form computation.  ... 
doi:10.1007/3-540-35767-x_28 fatcat:hinuvg556fgpzdypl6uiyic5xa
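
For context on the snippet above, computing a closed-form expression for an induction variable means replacing its iteration-to-iteration update with a direct function of the loop counter, which removes the loop-carried dependence. The before/after example below is a minimal, hypothetical sketch and is not drawn from the paper.

    // Hypothetical sketch of closed-form substitution for an induction variable.
    #include <vector>
    #include <cstddef>

    void fill_before(std::vector<int>& a) {
        int j = 5;                                   // induction variable
        for (std::size_t i = 0; i < a.size(); ++i) {
            a[i] = j;                                // j depends on the previous iteration
            j += 3;
        }
    }

    void fill_after(std::vector<int>& a) {
        for (std::size_t i = 0; i < a.size(); ++i) {
            a[i] = 5 + 3 * static_cast<int>(i);      // closed form j(i) = 5 + 3*i; no carried dependence
        }
    }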

The Structure of a Compiler for Explicit and Implicit Parallelism [chapter]

Seon Wook Kim, Rudolf Eigenmann
2003 Lecture Notes in Computer Science  
Our compilation system integrates the Polaris preprocessor with the Gnu C code generating compiler. We describe the major components that are involved in generating explicit and implicit threads.  ...  We describe in more detail two components that represent significant open issues. The first issue is the integration of the parallelizing preprocessor with the code generator.  ...  Polaris is a parallelizing preprocessor that contains advanced techniques, such as data-dependence analysis, array privatization, idiom recognition, and symbolic analysis [2].  ... 
doi:10.1007/3-540-35767-x_22 fatcat:5ibukxacijdx3kbicul5hsfnvy

PIR: PMaC's Idiom Recognizer

Catherine Olschanowsky, Allan Snavely, Mitesh R. Meswani, Laura Carrington
2010 2010 39th International Conference on Parallel Processing Workshops  
This paper describes the PIR implementation and defines a subset of idioms commonly found in HPC applications.  ...  PIR, PMaC's Static Idiom Recognizer, automates the pattern recognition process. PIR recognizes specified patterns and tags the source code where they appear using static analysis.  ...  The software used in this work was in part developed by the DOE-supported ASC / Alliance Center for Astrophysical Thermonuclear Flashes at the University of Chicago.  ... 
doi:10.1109/icppw.2010.36 dblp:conf/icppw/OlschanowskySMC10 fatcat:552nlyfvsne4zegqbntbfj6jze

Cetus – An Extensible Compiler Infrastructure for Source-to-Source Transformation [chapter]

Sang-Ik Lee, Troy A. Johnson, Rudolf Eigenmann
2004 Lecture Notes in Computer Science  
We created Cetus out of the need for a compiler research environment that facilitates the development of interprocedural analysis and parallelization techniques for C, C++, and Java programs.  ...  We will then compare these results with those of the Polaris Fortran translator.  ...  Conclusion We have presented an extensible compiler infrastructure, named Cetus, that has proved useful in dealing with C programs.  ... 
doi:10.1007/978-3-540-24644-2_35 fatcat:pn4r262oqfcb7ksolnjhermsme

Parallel Programming Environment for OpenMP

Insung Park, Michael J. Voss, Seon Wook Kim, Rudolf Eigenmann
2001 Scientific Programming  
The presented evaluation demonstrates that our environment offers significant support in general parallel tuning efforts and that the toolset facilitates many common tasks in OpenMP parallel programming  ...  Our toolset provides automated and interactive assistance to parallel programmers in time-consuming tasks of the proposed methodology.  ...  In this study, a programmer has tried to improve the performance of the program beyond that achieved by the Polaris parallelizing compiler.  ... 
doi:10.1155/2001/195437 fatcat:dfdcmc5j6bevvgzyq5kjce75m4

Automatic subject indexing using an associative neural network

Yi-Ming Chung, William M. Pottenger, Bruce R. Schatz
1998 Proceedings of the third ACM conference on Digital libraries - DL '98  
The global growth in popularity of the World Wide Web has been enabled in part by the availability of browser based search tools which in turn have led to an increased demand for indexing techniques and  ...  As the amount of globally accessible information in community repositories grows, it is no longer cost-effective for such repositories to be indexed by professional indexers who have been trained to be  ...  We would also like to thank Conrad Chang, Kevin Powell, Qin He, Nuala Bennett, Dan Pape, Margarita Ham, Ben Gross and Baba Buehler, and other members of the Interspace team both here and at the University  ... 
doi:10.1145/276675.276682 dblp:conf/dl/ChungPS98 fatcat:inszgby2grfq5j34vaalesf6f4

Ursa Major: Exploring Web technology for design and evaluation of high-performance systems [chapter]

Insung Park, Rudolf Eigenmann
1998 Lecture Notes in Computer Science  
experiences in using such a tool and its underlying technology for the design and evaluation of parallel machines and applications.  ...  Our discussion is based on a concrete tool, called Ursa Major, which addresses two specific problems with the design of parallel systems.  ...  Polaris, as a compiler, includes advanced program analysis and transformation techniques for array privatization, symbolic and nonlinear data dependence testing, idiom recognition, interprocedural analysis  ... 
doi:10.1007/bfb0037181 fatcat:3dcqp6sdwneobnulkhd2rklism

Semantic-Aware Automatic Parallelization of Modern Applications Using High-Level Abstractions

Chunhua Liao, Daniel J. Quinlan, Jeremiah J. Willcock, Thomas Panas
2010 International journal of parallel programming  
In this paper, we use a source-to-source compiler infrastructure, ROSE, to explore compiler techniques to recognize high-level abstractions and to exploit their semantics for automatic parallelization.  ...  Several representative parallelization candidate kernels are used to study semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses.  ...  Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided  ... 
doi:10.1007/s10766-010-0139-0 fatcat:ru2s63ic3zfj7hozhbyw7mieim
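
As a loose illustration of the idea in this entry (a hand-written sketch, not ROSE output; the OpenMP annotation shown is only an assumption about what such a tool might emit): once a compiler knows the semantics of a high-level abstraction such as std::vector, it can establish that the loop body touches disjoint elements and annotate the loop for parallel execution, where a purely syntactic analysis of the C++ might have to give up.

    // Hypothetical sketch: a loop over a std::vector annotated with OpenMP once
    // the container's semantics (disjoint element accesses, loop-invariant size)
    // are known to the compiler.
    #include <vector>
    #include <cstddef>

    void scale(std::vector<double>& v, double s) {
        #pragma omp parallel for
        for (std::size_t i = 0; i < v.size(); ++i) {
            v[i] *= s;   // independent iterations: safe to run in parallel
        }
    }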

Extending Automatic Parallelization to Optimize High-Level Abstractions for Multicore [chapter]

Chunhua Liao, Daniel J. Quinlan, Jeremiah J. Willcock, Thomas Panas
2009 Lecture Notes in Computer Science  
In this paper, we automatically parallelize C++ applications using ROSE, a multiple-language source-to-source compiler infrastructure which preserves the high-level abstractions and allows us to unambiguously  ...  Several representative parallelization candidate kernels are used to explore semantic-aware parallelization strategies for high-level abstractions, combined with extended compiler analyses.  ...  The Polaris compiler [2] is mainly used for improving loop-level automatic parallelization.  ... 
doi:10.1007/978-3-642-02303-3_3 fatcat:tddm42kq7rhbjhze2i7kpj6wfi
Showing results 1 — 15 out of 54 results