1 High Performance Computing: Concepts, Methods & Means
Performance I: Benchmarking
Prof. Thomas Sterling, Department of Computer Science, Louisiana State University
January 23rd, 2007

2 Topics
Definitions, properties and applications
Early benchmarks
Everything you ever wanted to know about Linpack (but were afraid to ask)
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

3 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

4 Basic Performance Metrics
Time related: execution time [seconds] (wall clock time; system and user time), latency, response time
Rate related: rate of computation (floating point operations per second [flops], integer operations per second [ops]), data transfer (I/O) rate [bytes/second]
Effectiveness: efficiency [%], memory consumption [bytes], productivity [utility/($*second)]
Modifiers: sustained, peak, theoretical peak
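For illustration, below is a minimal C sketch (not part of the original slides; the array size, kernel and names are arbitrary choices) that contrasts wall-clock time with CPU time for a simple vector update and derives a sustained Mflop/s figure from the operation count divided by elapsed wall-clock time.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/time.h>

/* Wall-clock time in seconds (POSIX gettimeofday). */
static double wall_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + 1.0e-6 * tv.tv_usec;
}

int main(void)
{
    const size_t n = 10 * 1000 * 1000;           /* arbitrary problem size */
    double *x = malloc(n * sizeof *x), *y = malloc(n * sizeof *y);
    if (!x || !y) return 1;
    for (size_t i = 0; i < n; i++) { x[i] = 1.0; y[i] = 2.0; }

    double  w0 = wall_seconds();
    clock_t c0 = clock();                        /* CPU time used by the process */
    for (size_t i = 0; i < n; i++)               /* 2 flops per element */
        y[i] = 3.0 * x[i] + y[i];
    clock_t c1 = clock();
    double  w1 = wall_seconds();

    double wall = w1 - w0;
    double cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;
    printf("wall %.6f s, cpu %.6f s, sustained %.1f Mflop/s\n",
           wall, cpu, 2.0 * (double)n / wall / 1.0e6);

    free(x); free(y);
    return 0;
}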

5 What Is a Benchmark?
Benchmark: a standardized problem or test that serves as a basis for evaluation or comparison (as of computer system performance) [Merriam-Webster]
The term "benchmark" also commonly applies to specially designed programs used in benchmarking
A benchmark should:
  be domain specific (the more general the benchmark, the less useful it is for anything in particular)
  be a distillation of the essential attributes of a workload
  avoid using a single metric to express the overall performance
Kinds of computational benchmarks:
  synthetic: specially created programs that impose a load on a specific component of the system
  application: derived from a real-world application program

6 Purpose of Benchmarking
To define the playing field
To provide a tool enabling quantitative comparisons
To accelerate progress: enable better engineering by defining measurable and repeatable objectives
To establish a performance agenda:
  measure release-to-release or version-to-version progress
  set goals to meet
  be understandable and useful also to people without expertise in the field (managers, etc.)

7 Properties of a Good Benchmark
Relevance: meaningful within the target domain
Understandability
Good metric(s): linear, orthogonal, monotonic
Scalability: applicable to a broad spectrum of hardware/architecture
Coverage: does not over-constrain the typical environment
Acceptance: embraced by users and vendors
Has to enable comparative evaluation
Limited lifetime: there is a point when additional code modifications or optimizations become counterproductive
Adapted from: Standard Benchmarks for Database Systems by Charles Levine, SIGMOD '97

8 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

9 Early Benchmarks
Whetstone: floating point intensive
Dhrystone: integer and character string oriented
Livermore Fortran Kernels ("Livermore Loops"): collection of short kernels
NAS kernel: 7 Fortran test kernels for aerospace computation
The sources of the benchmarks listed above are available from:

10 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

11 Linpack Overview
Introduced by Jack Dongarra in 1979
Based on the LINPACK linear algebra package developed by J. Dongarra, J. Bunch, C. Moler and G. W. Stewart (now superseded by the LAPACK library)
Solves a dense, regular system of linear equations, using matrices initialized with pseudo-random numbers
Provides an estimate of a system's effective floating-point performance
Does not reflect the overall performance of the machine!
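For reference, the reported performance figure follows the standard Linpack bookkeeping: the solver is charged with the classical operation count for LU factorization plus the two triangular solves, and that count is divided by the measured wall-clock time. The brief LaTeX sketch below states this convention; it is background knowledge about the benchmark, not a formula quoted from the slides.

\[
  \mathrm{ops}(n) \;=\; \tfrac{2}{3}\,n^{3} + 2\,n^{2},
  \qquad
  \mathrm{Mflop/s} \;=\; \frac{\mathrm{ops}(n)}{10^{6}\, t_{\mathrm{wall}}}
\]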

12 Linpack Benchmark Variants
Linpack Fortran (single processor):
  N=100
  N=1000, TPP, best effort
Linpack's Highly Parallel Computing benchmark (HPL)
Java Linpack

13 Linpack Fortran Performance on Different Platforms
Computer                                              | N=100 [MFlops] | N=1000, TPP [MFlops] | Theoretical Peak [MFlops]
Intel Pentium Woodcrest (1 core, 3 GHz)               | 3018           | 6542                 | 12000
NEC SX-8/8 (8 proc., 2 GHz)                           | -              | 75140                | 128000
NEC SX-8/8 (1 proc., 2 GHz)                           | 2177           | 14960                | 16000
HP ProLiant BL20p G3 (4 cores, 3.8 GHz Intel Xeon)    | -              | 8185                 | 14800
HP ProLiant BL20p G3 (1 core, 3.8 GHz Intel Xeon)     | 1852           | 4851                 | 7400
IBM eServer p5-575 (8 POWER5 proc., 1.9 GHz)          | -              | 34570                | 60800
IBM eServer p5-575 (1 POWER5 proc., 1.9 GHz)          | 1776           | 5872                 | 7600
SGI Altix 3700 Bx2 (1 Itanium2 proc., 1.6 GHz)        | 1765           | 5953                 | 6400
HP ProLiant BL45p (4 cores AMD Opteron 854, 2.8 GHz)  | -              | 12860                | 22400
HP ProLiant BL45p (1 core AMD Opteron 854, 2.8 GHz)   | 1717           | 4191                 | 5600
Fujitsu VPP5000/1 (1 proc., 3.33 ns)                  | 1156           | 8784                 | 9600
Cray T932 (32 proc., 2.2 ns)                          | 1129 (1 proc.) | 29360                | 57600
HP AlphaServer GS1280 7/1300 (8 Alpha proc., 1.3 GHz) | -              | 14260                | 20800
HP AlphaServer GS1280 7/1300 (1 Alpha proc., 1.3 GHz) | 1122           | 2132                 | 2600
HP 9000 rp (8 PA-8800 proc., 1000 MHz)                | -              | 14150                | 32000
HP 9000 rp (1 PA-8800 proc., 1000 MHz)                | 843            | 2905                 | 4000
Data excerpted from the LINPACK Benchmark Report.

14 Fortran Linpack Demo
> ./linpack
Please send the results of this run to: Jack J. Dongarra, Computer Science Department, University of Tennessee, Knoxville, Tennessee
The header is followed by a short accuracy report (norm. resid, resid, machep, x(1), x(n)).
Times are reported for matrices of order 100, once for an array with leading dimension of 201 and once for leading dimension of 200 (two different dimensions are used to test the effect of array placement in memory). Each timing row contains:
  dgefa: time spent in the matrix factorization routine
  dgesl: time spent in the solver
  total: total time (dgefa + dgesl)
  mflops: sustained floating point rate
  unit: "timing" unit (obsolete)
  ratio: fraction of Cray-1S execution time (obsolete)
  b(1): first element of the right hand side vector
end of tests -- this version dated 05/29/04
Reference:

15 Linpack’s Highly Parallel Computing Benchmark (HPL)
Measures the performance of distributed memory machines
Used in the "Linpack Benchmark Report" (Table 3) and to determine the order of machines on the Top500 list
The portable version is written in C
External dependencies:
  MPI-1.1 functionality for inter-node communication
  BLAS or VSIPL library for simple vector operations such as scaled vector addition (DAXPY: y = αx + y) and the inner (dot) product (DDOT: a = Σ x_i y_i); reference-style versions are sketched below
Ground rules:
  allows a complete user replacement of the LU factorization and solver steps (the accuracy must satisfy a given bound)
  same matrix as in the driver program
  no restrictions on problem size
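The two BLAS level-1 operations named in the dependency list are simple enough to show directly; the following plain C sketch is an illustration only, not the optimized BLAS or VSIPL routines that HPL would actually link against.

#include <stddef.h>

/* y = alpha*x + y  (the DAXPY operation mentioned above) */
void daxpy(size_t n, double alpha, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] += alpha * x[i];
}

/* a = sum_i x[i]*y[i]  (the DDOT operation mentioned above) */
double ddot(size_t n, const double *x, const double *y)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += x[i] * y[i];
    return sum;
}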

16 HPL Linpack Metrics
The HPL implementation of the benchmark is run for different problem sizes N on the entire machine
For a certain problem size Nmax, the cumulative performance in Mflops (counting 64-bit addition and multiplication operations) reaches its maximum value, denoted Rmax
Another metric obtainable from the benchmark is N1/2, the problem size for which half of the maximum performance (Rmax/2) is achieved
The Rmax value is used to rank supercomputers on the Top500 list; listed along with this number are the theoretical peak double precision floating point performance Rpeak of the machine and N1/2
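In symbols, a brief LaTeX sketch of the definitions above; the expression for the theoretical peak is the usual convention (per-cycle floating point operations times clock rate, summed over processors), not a formula quoted from the slide.

\[
  R_{\max} = \operatorname{perf}(N_{\max}) = \max_{N} \operatorname{perf}(N),
  \qquad
  \operatorname{perf}(N_{1/2}) = \tfrac{1}{2} R_{\max},
  \qquad
  R_{\mathrm{peak}} = N_{\mathrm{proc}} \times \frac{\mathrm{flops}}{\mathrm{cycle}} \times f_{\mathrm{clock}}
\]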

17 Machine Parameters Influencing Linpack Performance
Parameter                  | Linpack Fortran, N=100 | Linpack Fortran, N=1000 (TPP) | HPL
Processor speed            | Yes                    | Yes                           | Yes
Memory capacity            | No                     | No (modern system)            | Yes (for Rmax)
Network latency/bandwidth  | No                     | No                            | Yes
Compiler flags             | Yes                    | Yes                           | Yes

18 Ten Fastest Supercomputers On Current Top500 List
#  | Computer         | Site                                               | Processors | Rmax [GFlops] | Rpeak [GFlops]
1  | IBM Blue Gene/L  | DoE/NNSA/LLNL (USA)                                | 131,072    | 280,600       | 367,000
2  | Cray Red Storm   | Sandia (USA)                                       | 26,544     | 101,400       | 127,411
3  | IBM BGW          | IBM T. Watson Research Center (USA)                | 40,960     | 91,290        | 114,688
4  | IBM ASC Purple   | DoE/NNSA/LLNL (USA)                                | 12,208     | 75,760        | 92,781
5  | IBM Mare Nostrum | Barcelona Supercomputing Center (Spain)            | 10,240     | 62,630        | 94,208
6  | Dell Thunderbird | NNSA/Sandia (USA)                                  | 9,024      | 53,000        | 64,973
7  | Bull Tera-10     | Commissariat a l'Energie Atomique (France)         | 9,968      | 52,840        | 63,795
8  | SGI Columbia     | NASA/Ames Research Center (USA)                    | 10,160     | 51,870        | 60,960
9  | NEC/Sun Tsubame  | GSIC Center, Tokyo Institute of Technology (Japan) | 11,088     | 47,380        | 82,125
10 | Cray Jaguar      | Oak Ridge National Laboratory (USA)                | 10,424     | 43,480        | 54,205
Source:

19 HPL Demo
> mpirun -np 4 xhpl
============================================================================
HPLinpack 1.0a -- High-Performance Linpack benchmark -- January 20, 2004
Written by A. Petitet and R. Clint Whaley, Innovative Computing Labs., UTK
============================================================================
An explanation of the input/output parameters follows:
  T/V    : Wall time / encoded variant.
  N      : The order of the coefficient matrix A.
  NB     : The partitioning blocking factor.
  P      : The number of process rows.
  Q      : The number of process columns.
  Time   : Time in seconds to solve the linear system.
  Gflops : Rate of execution for solving the linear system.
The run echoes the parameter values to be used, including N, NB, P, Q, PMAP (row-major process mapping), PFACT (Left), NBMIN, NDIV, RFACT (Left), BCAST (1ringM), DEPTH, SWAP (Mix, threshold = 64), L1 and U in transposed form, EQUIL (yes) and ALIGN (8 double precision words).
- The matrix A is randomly generated for each test.
- The following scaled residual checks will be computed:
  1) ||Ax-b||_oo / ( eps * ||A||_1 * N )
  2) ||Ax-b||_oo / ( eps * ||A||_1 * ||x||_1 )
  3) ||Ax-b||_oo / ( eps * ||A||_oo * ||x||_oo )
- Computational tests pass if the scaled residuals fall below the configured threshold (eps is the relative machine precision).
Each test then prints a result line under the header
  T/V  N  NB  P  Q  Time  Gflops
followed by the three residual checks; in this run all three checks PASSED for every test.
Finished tests with the following results:
  3 tests completed and passed residual checks,
  0 tests completed and failed residual checks,
  0 tests skipped because of illegal input values.
End of Tests.
For configuration issues, consult:
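As an aside, the first of the scaled residual checks printed above is straightforward to express in code. The serial C sketch below is an illustration for a row-major n-by-n matrix, not HPL's distributed implementation; it computes ||Ax-b||_oo / (eps * ||A||_1 * N).

#include <float.h>
#include <math.h>
#include <stddef.h>

/* First HPL-style scaled residual: ||Ax-b||_oo / (eps * ||A||_1 * N). */
double scaled_residual(size_t n, const double *A, const double *x, const double *b)
{
    double r_inf = 0.0;             /* ||Ax - b||_oo */
    for (size_t i = 0; i < n; i++) {
        double ri = -b[i];
        for (size_t j = 0; j < n; j++)
            ri += A[i * n + j] * x[j];
        if (fabs(ri) > r_inf) r_inf = fabs(ri);
    }

    double a_1 = 0.0;               /* ||A||_1 = max column sum of |a_ij| */
    for (size_t j = 0; j < n; j++) {
        double col = 0.0;
        for (size_t i = 0; i < n; i++)
            col += fabs(A[i * n + j]);
        if (col > a_1) a_1 = col;
    }

    return r_inf / (DBL_EPSILON * a_1 * (double)n);
}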

20 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

21 Other Parallel Benchmarks
High Performance Computing Challenge (HPCC) benchmarks
  devised and sponsored to enrich the benchmarking parameter set
NAS Parallel Benchmarks (NPB)
  powerful set of metrics
  reflects computational fluid dynamics
NPBIO-MPI
  stresses the external I/O system

22 HPC Challenge Benchmark
Consists of 7 individual tests:
  HPL (Linpack TPP): floating point rate of execution of a solver of a linear system of equations
  DGEMM: floating point rate of execution of double precision matrix-matrix multiplication
  STREAM: sustainable memory bandwidth (GB/s) and the corresponding computation rate for simple vector kernels (a simplified triad sketch follows after this list)
  PTRANS (parallel matrix transpose): total capacity of the network, using pairwise communicating processes
  RandomAccess: the rate of random integer updates of memory (in GUPS: Giga-Updates Per Second)
  FFT: floating point rate of execution of a double precision complex 1-D Discrete Fourier Transform
  b_eff (effective bandwidth benchmark): latency and bandwidth of a number of simultaneous communication patterns
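To convey what the STREAM component measures, here is a much-simplified triad kernel in C. The array length and scalar are arbitrary, and the official STREAM benchmark has strict run rules, repetitions and result validation that this sketch omits; it only illustrates the idea of bytes moved per second.

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static double wall_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + 1.0e-6 * tv.tv_usec;
}

int main(void)
{
    const size_t n = 20 * 1000 * 1000;          /* arbitrary array length */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) return 1;

    for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

    double t0 = wall_seconds();
    for (size_t i = 0; i < n; i++)              /* triad: a = b + s*c */
        a[i] = b[i] + 3.0 * c[i];
    double t1 = wall_seconds();

    /* Count two reads (b, c) and one write (a) of a double per iteration. */
    double bytes = 3.0 * (double)n * sizeof(double);
    printf("triad bandwidth: %.2f GB/s\n", bytes / (t1 - t0) / 1.0e9);

    free(a); free(b); free(c);
    return 0;
}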

23 Comparison of HPCC Results on Selected Supercomputers
Notes:
  all metrics shown are "higher is better", except for the Random Ring Latency
  machine labels include: machine name (optional), manufacturer and system name, affiliation, and (in parentheses) processor/network fabric type

24 NAS Parallel Benchmarks
Derived from computational fluid dynamics (CFD) applications
Consist of five kernels and three pseudo-applications
Exist in several flavors:
  NPB 1: original "paper-and-pencil" specification; generally proprietary implementations by hardware vendors
  NPB 2: MPI-based sources distributed by NAS; supplements NPB 1; can be run with little or no tuning
  NPB 3: implementations in OpenMP, HPF and Java, derived from the NPB-serial version with improved serial code; a set of multi-zone benchmarks was added to test the implementation efficiency of multi-level and hybrid parallelization methods and tools (e.g. OpenMP with MPI)
  GridNPB 3: new suite of benchmarks designed to rate the performance of computational grids; includes only four benchmarks, derived from the original NPB; written in Fortran and Java; uses Globus as grid middleware

25 NPB 2 Overview
Multiple problem classes (S, W, A, B, C, D)
Tests written mainly in Fortran (IS in C):
  BT (block tri-diagonal solver with 5x5 block size)
  CG (conjugate gradient approximation to compute the smallest eigenvalue of a sparse, symmetric positive definite matrix)
  EP ("embarrassingly parallel"; evaluates an integral by means of pseudorandom trials; see the sketch after this list)
  FT (3-D PDE solver using Fast Fourier Transforms)
  IS (large integer sort; tests both integer computation speed and network performance)
  LU (a regular-sparse, 5x5 block lower and upper triangular system solver)
  MG (simplified multigrid kernel; tests both short- and long-distance data communication)
  SP (solves multiple independent systems of non-diagonally dominant, scalar, pentadiagonal equations)
Sources and reports available from:
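For a feel of the "embarrassingly parallel" pattern that EP exemplifies, here is a hedged MPI/C sketch: each rank runs independent pseudorandom trials and a single reduction combines them at the end. The dart-throwing estimate of pi and the generator used here are illustrative assumptions, not the actual NPB EP algorithm (which tallies pairs of Gaussian deviates produced by a prescribed linear congruential generator).

#include <mpi.h>
#include <stdint.h>
#include <stdio.h>

/* Simple 64-bit LCG, used only to keep the sketch self-contained.
 * It is NOT the pseudorandom generator prescribed by NPB EP. */
static double lcg_uniform(uint64_t *state)
{
    *state = *state * 6364136223846793005ULL + 1442695040888963407ULL;
    return (double)(*state >> 11) / 9007199254740992.0;   /* top 53 bits -> [0,1) */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long trials_per_rank = 10 * 1000 * 1000;            /* arbitrary workload */
    uint64_t state = 0x9E3779B97F4A7C15ULL + (uint64_t)rank;  /* decorrelate ranks */
    long hits = 0;

    /* Independent trials on every rank: no communication inside the loop. */
    for (long i = 0; i < trials_per_rank; i++) {
        double u = lcg_uniform(&state);
        double v = lcg_uniform(&state);
        if (u * u + v * v <= 1.0) hits++;
    }

    /* A single reduction at the end is the hallmark of an embarrassingly
     * parallel kernel. */
    long total_hits = 0;
    MPI_Reduce(&hits, &total_hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        double pi = 4.0 * (double)total_hits / ((double)trials_per_rank * size);
        printf("pi estimated with %d ranks: %.6f\n", size, pi);
    }

    MPI_Finalize();
    return 0;
}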

26 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

27 Benchmarking Organizations
SPEC (Standard Performance Evaluation Corporation)
  created to satisfy the need for realistic, fair and standardized performance tests
  motto: "An ounce of honest data is worth more than a pound of marketing hype"
TPC (Transaction Processing Performance Council)
  formed primarily due to the lack of reliable database benchmarks

28 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

29 Presentation of the Results
Tables
Graphs:
  bar graphs (a)
  scatter plots (b)
  line plots (c)
  pie charts (d)
  Gantt charts (e)
  Kiviat graphs (f)
Enhancements:
  error bars, boxes or confidence intervals
  broken or offset scales (be careful!)
  multiple curves per graph (but avoid overloading)
  data labels, colors, etc.

30 Kiviat Graph Example Source:

31 Mixed Graph Example
Characterization of NSF/CCT parallel applications on the POWER5 architecture (using data collected by IPM)
Applications shown: WRF, OOCORE, MILC, PARATEC, HOMME, BSSN_PUGH, Whisky_Carpet, ADCIRC, PETSc_FUN3D
Quantities plotted: computation fraction, communication fraction, floating point operations, load/store operations, other operations

32 Graph Do's and Don'ts
Good graphs:
  require minimum effort from the reader
  maximize information
  maximize the information-to-ink ratio
  use commonly accepted practices
  avoid ambiguity
Poor graphs:
  have too many alternatives on a single chart
  display too many y-variables on a single chart
  use vague symbols in place of text
  show extraneous information
  select scale ranges improperly
  use a line chart where a bar graph is appropriate
Reference: Raj Jain, The Art of Computer Systems Performance Analysis, Chapter 10

33 Common Mistakes in Benchmarking
From Chapter 9 of The Art of Computer Systems Performance Analysis by Raj Jain:
  Only average behavior represented in test workload
  Skewness of device demands ignored
  Loading level controlled inappropriately
  Caching effects ignored
  Buffering sizes not appropriate
  Inaccuracies due to sampling ignored
  Ignoring monitoring overhead
  Not validating measurements
  Not ensuring same initial conditions
  Not measuring transient performance
  Using device utilizations for performance comparisons
  Collecting too much data but doing very little analysis

34 Misrepresentation of Performance Results on Parallel Computers
Quote only 32-bit performance results, not 64-bit results
Present performance for an inner kernel, representing it as the performance of the entire application
Quietly employ assembly code and other low-level constructs
Scale problem size with the number of processors, but omit any mention of this fact
Quote performance results projected to the full system
Compare your results with scalar, unoptimized code run on another platform
When direct run time comparisons are required, compare with an old code on an obsolete system
If MFLOPS rates must be quoted, base the operation count on the parallel implementation, not on the best sequential implementation
Quote performance in terms of processor utilization, parallel speedups or MFLOPS per dollar
Mutilate the algorithm used in the parallel implementation to match the architecture
Measure parallel run times on a dedicated system, but measure conventional run times in a busy environment
If all else fails, show pretty pictures and animated videos, and don't talk about performance
Reference: David Bailey, "Twelve Ways to Fool the Masses When Giving Performance Results on Parallel Computers", Supercomputing Review, Aug 1991, pp. 54-55

35 Definitions, properties and applications
Early benchmarks
Linpack
Other parallel benchmarks
Organized benchmarking
Presentation and interpretation of results
Summary

36 Knowledge Factors & Skills
Knowledge factors:
  benchmarking and metrics
  performance factors
  Top500 list
Skill set:
  determine the state of system resources and manipulate them
  acquire, run and measure benchmark performance
  launch user application codes

37 Material For Test
Basic performance metrics (slide 4)
Definition of a benchmark in your own words; purpose of benchmarking; properties of a good benchmark (slides 5, 6, 7)
Linpack: what it is, what it measures, concepts and complexities (slides 15, 17, 18)
HPL (slides 21 and 24)
Linpack compare and contrast (slide 25)
General knowledge about the HPCC and NPB suites (slides 31 and 34)
Benchmark result interpretation (slides 49, 50)

