HPC Benchmarking, Rudi Eigenmann, 2006 SPEC Benchmark Workshop

HPC Benchmarking and Performance Evaluation With Realistic Applications
Brian Armstrong, Hansang Bae, Rudolf Eigenmann, Faisal Saied, Mohamed Sayeed, Yili Zheng
Purdue University

Benchmarking has two important goals:
1. Assess the performance of high-performance computer platforms. This is important for machine procurements and for understanding where HPC technology is heading.
2. Measure and show opportunities for progress in HPC. This is important to quantify and compare scientific research contributions and to set new directions for research.
Why Talk About Benchmarking? There is no progress if you can't measure it.

12 ways to fool the scientist (with computer performance evaluation):
1. Use benchmark applications unknown to others; give no reference.
2. Use applications that have the same name as known benchmarks, but that show better performance of your innovation.
3. Use only those benchmarks out of a suite that show good performance on your novel technique.
4. Use only the benchmarks out of the suite that don't break your technique.
5. Modify the benchmark source code.
6. Change data set parameters.
7. Use the debug data set.
8. Use a few loops out of the full programs only.
9. Measure loop performance, but label it as the full application.
10. Don't mention in your paper why you have chosen the benchmarks in this way and what changes you have made.
11. Time the interesting part of the program only; exclude overheads.
12. Measure interesting overheads only; exclude large unwanted items.
Benchmarks Need to be Representative and Open

Representative benchmarks:
– Represent real problems
Open benchmarks:
– No proprietary strings attached
– Source code and performance data can be freely distributed

With these goals in mind, SPEC's High-Performance Group was formed in 1994.
Why is Benchmarking with Real Applications Hard?
– Simple benchmarks are overly easy to run
– Realistic benchmarks cannot be abstracted from real applications
– Today's realistic applications may not be tomorrow's applications
– Benchmarking is not eligible for research funding
– Maintaining benchmarking efforts is costly
– Proprietary full-application benchmarks cannot serve as yardsticks
SPEC HPC2002

Includes three (suites of) codes:
– SPECchem, used in the chemical and pharmaceutical industries (gamess): 110,000 lines of Fortran and C
– SPECenv, a weather forecast application (WRF): 160,000 lines of Fortran and C
– SPECseis, used in the search for oil and gas: 20,000 lines of Fortran and C

All codes include several data sets and are available in serial and parallel variants (MPI, OpenMP, and hybrid execution are possible).

SPEC HPC is used in the TAP list (Top Application Performers):
– the rank list of HPC systems based on realistic applications
– www.purdue.edu/TAPlist
– emphasis on the most realistic applications; no programming model is favored
Rank List of Supercomputers Based on Realistic Applications (SPEC HPC, medium data set)
Can We Learn the Same from Kernel Benchmarks?
MPI Communication (percent of overall runtime)
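The metric on this slide, communication time as a percentage of overall runtime, can be sketched as follows. The per-rank timing values below are hypothetical illustrations, not measurements from the SPEC HPC runs; in a real run they would come from timers (e.g. MPI_Wtime) wrapped around the communication calls.

```python
# Sketch of the "MPI communication as percent of overall runtime" metric.
# All timing numbers here are hypothetical illustration values.

def comm_percentage(total_time, comm_time):
    """Fraction of wall-clock runtime spent in communication, in percent."""
    if total_time <= 0:
        raise ValueError("total runtime must be positive")
    return 100.0 * comm_time / total_time

# Hypothetical per-rank (total, communication) times in seconds:
ranks = [(120.0, 18.0), (120.5, 21.0), (119.8, 16.5), (120.2, 19.5)]

# One common convention is to report the mean over all ranks:
mean_pct = sum(comm_percentage(t, c) for t, c in ranks) / len(ranks)
print(f"mean MPI communication share: {mean_pct:.1f}%")
```

A per-rank breakdown (rather than the mean) would also expose load imbalance, which the mean hides.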
I/O Behavior: Disk Read/Write Times and Volumes

Only SPECseis has parallel I/O. SPECenv and SPECchem perform I/O on a single processor. HPL has no I/O.
I/O Volume, Time, and Effective Bandwidth
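Effective bandwidth, as used on this slide, is the total I/O volume divided by the time actually spent in I/O. A minimal sketch of the arithmetic, with hypothetical numbers:

```python
# Sketch of the "effective bandwidth" metric: I/O volume over I/O time.
# The example volume and time below are hypothetical, not SPEC HPC data.

def effective_bandwidth_mb_s(volume_bytes, io_time_s):
    """Effective I/O bandwidth in MB/s (volume divided by time spent in I/O)."""
    if io_time_s <= 0:
        raise ValueError("I/O time must be positive")
    return volume_bytes / (1024 * 1024) / io_time_s

# Hypothetical example: 2 GiB transferred during 40 s of I/O time
bw = effective_bandwidth_mb_s(2 * 1024**3, 40.0)
print(f"effective bandwidth: {bw:.1f} MB/s")  # 2048 MiB / 40 s = 51.2 MB/s
```

Note that this "effective" figure is usually well below the disk's peak bandwidth, since it folds in seek time, metadata operations, and any serialization on a single I/O processor.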
Memory Footprints
Conclusions
– There is a dire need to base performance evaluation and benchmarking results on realistic applications.
– The SPEC HPC suite meets the main criteria for real-application benchmarking: relevance and openness.
– Kernel benchmarks are the best choice for measuring individual system components. However, a large range of questions can only be answered satisfactorily using real-application benchmarks.
– Benchmarking with real applications is hard and poses many challenges, but there is no replacement.