5,887 Hits in 3.9 sec

How to Build a Benchmark

Jóakim v. Kistowski, Jeremy A. Arnold, Karl Huppler, Klaus-Dieter Lange, John L. Henning, Paul Cao
2015 Proceedings of the 6th ACM/SPEC International Conference on Performance Engineering - ICPE '15  
We focus on the characteristics important for a standardized benchmark, as created by the SPEC and TPC consortia.  ...  This paper introduces the primary concerns of benchmark development from the perspectives of SPEC and TPC committees.  ...  This definition is a variation of a definition provided in [12] with a focus on the competitive aspects of benchmarks, as that is the primary purpose of standardized benchmarks as developed by SPEC and  ... 
doi:10.1145/2668930.2688819 dblp:conf/wosp/KistowskiAHLHC15 fatcat:nay2riil7bakrov2cj4ay4uula

SPEC CPU2000: measuring CPU performance in the New Millennium

J.L. Henning
2000 Computer  
By license agreement, members agree to run and report results as specified by each benchmark suite. On June 30, 2000, SPEC retired the CPU95 benchmark suite.  ...  By continually evolving these benchmarks, SPEC aims to keep pace with the breakneck speed of technological innovation. But how does SPEC develop a benchmark suite and what do these benchmarks do?  ... 
doi:10.1109/2.869367 fatcat:mz3dfrj4kndd7cfcwlwwsix75u

The nofib Benchmark Suite of Haskell Programs [chapter]

Will Partain
1993 Functional Programming, Glasgow 1992  
This position paper describes the need for, make-up of, and "rules of the game" for a benchmark suite of Haskell programs. (It does not include results from running the suite.)  ...  My thanks to John Mashey for his many fine articles in comp.arch that promote sensible benchmarking, and to Jeff Reilly for providing information about SPEC.  ...  Vincent Delacour, Denis Howe, John O'Donnell, Paul Sanders, and Julian Seward were among those who provided helpful comments on earlier versions of this paper.  ... 
doi:10.1007/978-1-4471-3215-8_17 dblp:conf/fp/Partain92 fatcat:zeulyl3uhjfnjar654q5jmymhe

Benchmarking [chapter]

Reinhold Weicker
2002 Lecture Notes in Computer Science  
The SPEC Open Systems Group, TPC and SAP benchmarks are discussed in more detail. The use of benchmarks in academic research is discussed.  ...  Finally, some current issues in benchmarking are listed that users of benchmark results should be aware of.  ...  I want to thank my colleagues in the Fujitsu Siemens Benchmark Center in Paderborn for valuable suggestions, in particular Walter Nitsche, Ludger Meyer, and Stefan Gradek, whose "Performance Brief PRIMEPOWER  ... 
doi:10.1007/3-540-45798-4_9 fatcat:gzo2uloyhjee3kaw4xy455qp2a

A real-time benchmark for Java™

Brian P. Doherty
2007 Proceedings of the 5th international workshop on Java technologies for real-time and embedded systems - JTRES '07  
SPEC and the benchmark name SPECjbb2005 are trademarks of the Standard Performance Evaluation Corporation. Results as of 12/05 on www.spec.org.  ...  For the latest SPECjbb2005 benchmark results, visit http://www.spec.org/osg/jbb2005  ...  SPECjbb®2005rt: based on SPECjbb2005, a well-known Java server benchmark from SPEC with active results submissions.  ... 
doi:10.1145/1288940.1288946 dblp:conf/jtres/Doherty07 fatcat:hgbabepzavbgnjysmk4rgzuq4u

Synthesis through Unification [article]

Rajeev Alur, Pavol Cerny, Arjun Radhakrishna
2015 arXiv   pre-print
We implemented these specializations in prototype tools, and we show that our tools often perform significantly better on standard benchmarks than a tool based on a pure CEGIS approach.  ...  If Prog and Prog' can be unified into a program in the program space, then a solution has been found.  ...  Note that the SyGuS competition benchmarks only go up to max 5.  ... 
arXiv:1505.05868v1 fatcat:o3hscxbvijbttfrspzf7xflhpu

Not So Fast: Analyzing the Performance of WebAssembly vs. Native Code [article]

Abhinav Jangda, Bobby Powers, Emery Berger, Arjun Guha
2019 arXiv   pre-print
Across the SPEC CPU suite of benchmarks, we find a substantial performance gap: applications compiled to WebAssembly run slower by an average of 45% (Firefox) to 55% (Chrome), with peak slowdowns of 2.08x  ...  (Firefox) and 2.5x (Chrome).  ...  Finally, BROWSIX-SPEC kills the browser process and records the benchmark results.  ... 
arXiv:1901.09056v3 fatcat:aq7bowe3obflnmv3b5qttjttkm

Evaluating OpenMP on Chip MultiThreading Platforms [chapter]

Chunhua Liao, Zhenying Liu, Lei Huang, Barbara Chapman
2008 Lecture Notes in Computer Science  
OpenMP performance is studied using the EPCC Microbenchmark suite, subsets of the benchmarks in SPEC OMPM2001 and the NAS parallel benchmark 3.0 suites.  ...  Nawal Copty from Sun Microsystems Inc. helped us to run some benchmarks and to understand some results.  ... 
doi:10.1007/978-3-540-68555-5_15 fatcat:g6r6utog7ndchd6tjm2m4vinka

Performance Characterization of Modern Databases on Out-of-Order CPUs

Reena Panda, Christopher Erb, Michael LeBeane, Jee Ho Ryoo, Lizy Kurian John
2015 2015 27th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD)  
In this paper, we also compare data-serving applications with other popular benchmarks such as SPEC CPU2006 and SPECjbb2005.  ...  We also show that significant diversity exists among different database implementations and big-data benchmark designers can leverage our analysis to incorporate representative workloads to encapsulate  ...  SPEC-INT and SPEC-FP represent the average value for SPEC integer and floating-point benchmarks respectively. A.  ... 
doi:10.1109/sbac-pad.2015.31 dblp:conf/sbac-pad/PandaELRJ15 fatcat:62274pjgfvf25nztf5i6oeyhuq

The Second Rewrite Engines Competition

Francisco Durán, Manuel Roldán, Emilie Balland, Mark van den Brand, Steven Eker, Karl Trygve Kalleberg, Lennart C.L. Kats, Pierre-Etienne Moreau, Ruslan Schevchenko, Eelco Visser
2009 Electronic Notes in Theoretical Computer Science  
We explain here how the competition was organized and conducted, and present its main results and conclusions.  ...  We will present in this paper its main results, some conclusions and future challenges. The first competition focused on efficiency, specifically speed, memory management and built-ins use.  ...  Acknowledgement We would like to thank Grigore Roşu, as organizer of WRLA 2008 and the First Rewrite Engines Competition, for his help and support. His experience and comments were very useful.  ... 
doi:10.1016/j.entcs.2009.05.025 fatcat:nfolvlqprjfbbcjin4wwlogzka

A Study of Implicit Data Distribution Methods for OpenMP Using the SPEC Benchmarks [chapter]

Dimitrios S. Nikolopoulos, Eduard Ayguadé
2001 Lecture Notes in Computer Science  
Our runtime memory management algorithms improve the speedup of five SPEC benchmarks by 20-25% on average.  ...  This paper evaluates the effectiveness of using this runtime data distribution method in non embarrassingly parallel codes, such as the SPEC benchmarks.  ...  Compared to the NAS benchmarks, the SPEC codes present us with a different picture. The native SPEC CPU2000 benchmarks are not parallelized.  ... 
doi:10.1007/3-540-44587-0_11 fatcat:u243gnubyfcqlce2c35fik42oe

HashCore: Proof-of-Work Functions for General Purpose Processors [article]

Yanni Georghiades, Steven Flolid, Sriram Vishwanath
2019 arXiv   pre-print
By modeling HashCore after such benchmarks, we create a Proof-of-Work function that can be run most efficiently on a GPP, resulting in a more accessible, competitive, and balanced mining market.  ...  We observe that GPP designers/developers essentially create an ASIC for benchmarks such as SPEC CPU 2017.  ...  Karl Kreder and Dr. Mohit Tiwari for the insightful discussions we had about the overarching goals and requirements of HashCore. We also thank Dr.  ... 
arXiv:1902.00112v2 fatcat:pitiinsxirbtlomlec2k4gnye4

Tejas Simulator : Validation against Hardware [article]

Smruti R. Sarangi, Rajshekar Kalayappan, Prathmesh Kallurkar, Seep Goel
2015 arXiv   pre-print
We report mean error rates of 11.45% and 18.77% for the SPEC2006 and Splash2 benchmark suites respectively.  ...  These error rates are competitive and in most cases better than the numbers reported by other contemporary simulators.  ...  Only 4 benchmarks have errors in the 20-30% range (sjeng, astar, mcf, and gcc). Figure 2 shows the results for a set of 11 benchmarks from the SPLASH-2 suite.  ... 
arXiv:1501.07420v1 fatcat:sgdkhoceujekhmhrlhovmuusvy

A64FX – Your Compiler You Must Decide! [article]

Jens Domke
2021 arXiv   pre-print
While the specifications of the chip and overall system architecture, and benchmarks submitted to various lists, like TOP500 and Green500, etc., are clearly highlighting the potential, the proliferation  ...  We test three state-of-the-art compiler suites against a broad set of benchmarks.  ...  software installation and application debugging.  ... 
arXiv:2107.07157v2 fatcat:lopyleh3lffl3jq5qm65va6wsu

Page 135 of IEEE Transactions on Computers Vol. 52, Issue 2 [page]

2003 IEEE Transactions on Computers  
Three SPEC benchmarks (mgrid, swim, and tomcatv) were run on the HP machine with 256 MB main memory. At the same time, several backup processes for swim were also executed on the HP workstation.  ...  The increased total execution time of the SPEC benchmarks shown in Table 4 indicates the performance lost due to the backup processes.  ... 