D6.3.1: Report on available performance analysis and benchmark tools, representative Benchmark

Peter Michielse, Jon Hill, Guillaume Houzeaux, Olli-Pekka Lehto, Walter Lioen
2008 Zenodo  
This document reports on the construction of a benchmark suite, to be used both within the current PRACE project and beyond, when actual Tier-0 systems are purchased. Apart from the benchmark suite, this document also reports on currently available performance analysis tools and synthetic benchmarks, as these are essential for monitoring the scalability and optimisation of benchmark codes, and for analysing and comparing the basic components of HPC systems.

This document takes its input from various sources. First, there is the list of applications and their requirements, as delivered by tasks 6.1 and 6.2. As these applications are among the most frequently used on current European HPC platforms, they should form the basis of a PRACE benchmark suite. Secondly, there is the hardware architecture survey conducted by WP7 and its consequences for the prototype systems to be used within PRACE. As these prototype architectures are considered important, it makes sense to use them as platforms for benchmark preparations on scalability (handled by task 6.4) and optimisation (task 6.5). A third aspect is the available combinations of expertise on applications and expertise on architectures, which should be exploited as appropriately and efficiently as possible.

PRACE targets a European research infrastructure, ideally consisting of various hardware architectures. This implicitly means that some applications are better suited to certain architectures than others. This needs to be reflected in the final benchmark suite, with the idea that subsets of the overall benchmark suite may potentially be used for benchmarking different architectures. Together, these aspects lead to the output described in this document: an initial benchmark suite, with applications ported to target architectures, including recommendations on further work and effort estimates for petascaling (task 6.4) and optimisation (task 6.5). Integration of the benchmark codes into a benchmark suite is an [...]
doi:10.5281/zenodo.6546108