Presentation transcript: CS 267 Applications of Parallel Processors, Lecture 9: Computational Electromagnetics - Large Dense Linear Systems (2/19/97)

1 CS 267 Applications of Parallel Processors
Lecture 9: Computational Electromagnetics - Large Dense Linear Systems
2/19/97
Horst D. Simon
http://www.cs.berkeley.edu/cs267

2 Outline - Lecture 9
- Computational Electromagnetics
- Sources of large dense linear systems
- Review of solution of linear systems with Gaussian elimination
- BLAS and memory hierarchy for linear algebra kernels

3 Outline - Lecture 10
- Layout of matrices on distributed memory machines
- Distributed Gaussian elimination
- Speeding up with advanced algorithms
- LINPACK and LAPACK
- LINPACK benchmark
- Tflops result

4 Outline - Lecture 11
- Designing portable libraries for parallel machines
- BLACS
- ScaLAPACK for dense linear systems
- Other linear algebra algorithms in ScaLAPACK

5 Computational Electromagnetics
- developed during the 1980s, driven by defense applications
- determine the RCS (radar cross section) of an airplane
- reduce the radar signature of a plane (stealth technology)
- other applications include antenna design and medical equipment
- two fundamental numerical approaches: the method of moments (MOM, frequency domain) and finite differences (time domain)

6 Computational Electromagnetics
image: Northwestern Univ. Computational Electromagnetics Laboratory http://nueml.ece.nwu.edu/
- discretize the surface into triangular facets using standard modeling tools
- the amplitudes of the currents on the surface are the unknowns
- the integral equation is discretized into a set of linear equations

7 Computational Electromagnetics (MOM)
After discretization the integral equation has the form Z J = V, where Z is the impedance matrix, J is the unknown vector of current amplitudes, and V is the excitation vector. Z is given as a four-dimensional integral.
(see Cwik, Patterson, and Scott, Electromagnetic Scattering on the Intel Touchstone Delta, IEEE Supercomputing '92, pp. 538-542)

8 Computational Electromagnetics (MOM)
The main steps in the solution process are:
A) computing the matrix elements
B) factoring the dense matrix
C) solving for one or more excitations
D) computing the fields scattered from the object

9 Analysis of MOM for Parallel Implementation

Task         Work     Parallelism        Parallel Speed
Fill         O(n^2)   embarrassing       low
Factor       O(n^3)   moderately diff.   very high
Solve        O(n^2)   moderately diff.   high
Field Calc.  O(n)     embarrassing       high

For most scientific applications the biggest gain in performance can be obtained by focusing on one task.

10 Results for Parallel Implementation on Delta

Task         Time (hours)   Performance (Gflop/s)
Fill         9.20           ~1.0
Factor       8.25           10.35
Solve        2.17           -
Field Calc.  0.12           3.0

The problem solved was for a matrix of size 48,672 (the world record in 1991).

11 Current Records for Solving Dense Systems

Year    System Size   Machine
1950's  O(100)
1991    55,296        CM-2
1992    75,264        Intel
1993    75,264        Intel
1994    76,800        CM-5
1995    128,600       Paragon XP
1996    215,000       ASCI Red

source: Alan Edelman http://www-math.mit.edu/~edelman/records.html

12 Sources of large dense linear systems
- not many outside CEM
- even within the CEM community, alternatives such as FD-TD (finite-difference time-domain) are heavily debated
In many instances, the choices of algorithms or methods in existing scientific codes or applications are not the result of careful planning and design. At best they reflect the state of the art at the time; at worst they are purely coincidental.

13 Review of Gaussian Elimination
see Demmel http://HTTP.CS.Berkeley.EDU/~demmel/cs267/lecture12/lecture12.html
Gaussian elimination to solve Ax = b:
- start with a dense matrix
- add multiples of each row to subsequent rows in order to create zeros below the diagonal
- end up with an upper triangular matrix U
Then solve the linear system with U by substitution, starting with the last variable.

14 Review of Gaussian Elimination (cont.)

... for each column i,
... zero it out below the diagonal by
... adding multiples of row i to later rows
for i = 1 to n-1
    ... for each row j below row i
    for j = i+1 to n
        ... add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

15 Review of Gaussian Elimination (cont.) (figure)

16 Review of Gaussian Elimination (cont.)

... for each column i,
... zero it out below the diagonal by
... adding multiples of row i to later rows
for i = 1 to n-1
    ... for each row j below row i
    for j = i+1 to n
        ... add a multiple of row i to row j
        for k = i to n
            A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

The ratio A(j,i)/A(i,i) is the multiplier m.

17 Review of Gaussian Elimination (cont.)

for i = 1 to n-1
    for j = i+1 to n
        m = A(j,i)/A(i,i)
        for k = i+1 to n
            A(j,k) = A(j,k) - m * A(i,k)

Computing m once per row and starting k at i+1 avoids recomputing the matrix entry A(j,i), which is known to become zero.

18 Review of Gaussian Elimination (cont.)
It will be convenient to store the multipliers m in the implicitly created zeros below the diagonal, so we can use them later to transform the right hand side b:

for i = 1 to n-1
    for j = i+1 to n
        A(j,i) = A(j,i)/A(i,i)
    for j = i+1 to n
        for k = i+1 to n
            A(j,k) = A(j,k) - A(j,i) * A(i,k)

19 Review of Gaussian Elimination (cont.)
Now we use Matlab (data parallel) notation to express the algorithm even more compactly:

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

The inner loop consists of one vector operation and one matrix-vector (rank-one update) operation. Note that the loop looks elegant, but is no longer as intuitive.
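To make this concrete for readers who want to run it, here is a minimal NumPy sketch of the same vectorized loop (my translation, not part of the original slides): it overwrites A in place, storing the multipliers below the diagonal and U on and above it, with no pivoting.

```python
import numpy as np

def lu_in_place(A):
    """Gaussian elimination without pivoting, NumPy analogue of the
    Matlab-style loop above. Overwrites A with the multipliers (L, with
    an implicit unit diagonal) below the diagonal and U on and above it."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for i in range(n - 1):
        # store multipliers in the zeroed-out part of column i (vector op)
        A[i+1:, i] /= A[i, i]
        # rank-one update of the trailing submatrix (matrix-vector style op)
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    return A
```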

20 Review of Gaussian Elimination (cont.) (figure)

21 Review of Gaussian Elimination (cont.)
Lemma (LU Factorization). If the above algorithm terminates (i.e. it did not try to divide by zero), then A = L*U.
Now we can state our complete algorithm for solving A*x = b:
1) Factorize A = L*U.
2) Solve L*y = b for y by forward substitution.
3) Solve U*x = y for x by backward substitution.
Then x is the solution we seek, because A*x = L*(U*x) = L*y = b.
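For illustration (not from the slides), steps 2 and 3 might look like this in NumPy, assuming the in-place factors produced by the sketch above and ignoring pivoting:

```python
import numpy as np

def lu_solve(LU, b):
    """Solve A*x = b given in-place LU factors: unit lower triangular L
    stored below the diagonal, U on and above it. Sketch only, no pivoting."""
    n = LU.shape[0]
    x = np.array(b, dtype=float)
    # forward substitution: L*y = b (L has an implicit unit diagonal)
    for i in range(n):
        x[i] -= LU[i, :i] @ x[:i]
    # backward substitution: U*x = y
    for i in range(n - 1, -1, -1):
        x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
    return x
```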

22 Review of Gaussian Elimination (cont.)
Here are some obvious problems with this algorithm, which we need to address:
- If A(i,i) is zero, the algorithm cannot proceed. If A(i,i) is tiny, we will also have numerical problems.
- The majority of the work is done by a rank-one update, which does not exploit the memory hierarchy as well as an operation like matrix-matrix multiplication.

23 Pivoting for Small A(i,i)
Why is pivoting needed?
A = [ 0 1 ]
    [ 1 0 ]
Even if A(i,i) is tiny but not zero, difficulties can arise (see the example in Jim Demmel's lecture notes). This problem is resolved by partial pivoting.

24 Partial Pivoting
Reorder the rows of A so that A(i,i) is large at each step of the algorithm. At step i of the algorithm, row i is swapped with row k > i if |A(k,i)| is the largest entry among |A(i:n,i)|.

for i = 1 to n-1
    find and record k where |A(k,i)| = max_{i<=j<=n} |A(j,i)|
    if |A(k,i)| = 0, exit with a warning that A is singular, or nearly so
    if i != k, swap rows i and k of A
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)   ... each quotient lies in [-1,1]
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
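A NumPy sketch of the same pivoted factorization (an illustration of the pseudocode above, not the actual LAPACK routine) could look like this:

```python
import numpy as np

def lu_partial_pivoting(A):
    """LU factorization with partial pivoting, following the pseudocode above.
    Overwrites A with the factors and returns the row permutation."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    for i in range(n - 1):
        # find the largest entry in column i on or below the diagonal
        k = i + np.argmax(np.abs(A[i:, i]))
        if A[k, i] == 0:
            raise ValueError("matrix is singular, or nearly so")
        if k != i:                          # swap rows i and k, and record it
            A[[i, k], :] = A[[k, i], :]
            piv[[i, k]] = piv[[k, i]]
        A[i+1:, i] /= A[i, i]               # each multiplier now lies in [-1, 1]
        A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
    return A, piv
```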

25 Partial Pivoting (cont.)
- for the 2-by-2 example, we get a very accurate answer
- there are several choices as to when to swap rows i and k
- we could use indirect addressing and not swap them at all, but this would be slow
- if we keep the permutation, then solving A*x = b only requires the additional step of permuting b

26 Fast linear algebra kernels: BLAS
- Simple linear algebra kernels such as matrix-matrix multiply (exercise) can be performed fast on memory hierarchies.
- More complicated algorithms can be built from some very basic building blocks and kernels.
- The interfaces of these kernels have been standardized as the Basic Linear Algebra Subprograms, or BLAS.
- Early agreement on a standard interface (around 1980) led to portable libraries for vector and shared memory parallel machines.
- The BLAS are classified into three categories: level 1, 2, and 3.
see Demmel http://HTTP.CS.Berkeley.EDU/~demmel/cs267/lecture02.html

27 Level 1 BLAS
Operate mostly on vectors (1D arrays) or pairs of vectors; perform O(n) operations; return either a vector or a scalar. Examples:
saxpy: y(i) = a * x(i) + y(i), for i = 1 to n. Saxpy is an acronym for the operation: S stands for single precision; daxpy is for double precision, caxpy for complex, and zaxpy for double complex.
sscal: x = a * x (scale a vector).
srot: replaces vectors x and y by c*x + s*y and -s*x + c*y, where c and s are typically a cosine and sine.
sdot: computes s = sum_{i=1}^n x(i)*y(i).
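In NumPy terms, these Level 1 operations reduce to one-line vector expressions; the following sketch is purely illustrative (real codes call the tuned BLAS through a library rather than writing these out):

```python
import numpy as np

n = 5
a, c, s = 2.0, 0.8, 0.6
x, y = np.arange(1.0, n + 1), np.ones(n)

y = a * x + y                            # saxpy: y <- a*x + y
x = a * x                                # sscal: x <- a*x
x, y = c * x + s * y, -s * x + c * y     # srot:  plane rotation of x and y
s_dot = x @ y                            # sdot:  sum_i x(i)*y(i)
```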

28 Level 2 BLAS
Operate mostly on a matrix (2D array) and a vector; return a matrix or a vector; O(n^2) operations. Examples:
sgemv: matrix-vector multiplication; computes y = y + A*x, where A is m-by-n, x is n-by-1 and y is m-by-1.
sger: rank-one update; computes A = A + y*x', where A is m-by-n, y is m-by-1, x is n-by-1, and x' is the transpose of x. This is a short way of saying A(i,j) = A(i,j) + y(i)*x(j) for all i, j.
strsv: triangular solve; solves T*x = y for x, where T is a triangular matrix.
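The Level 2 kernels have equally direct NumPy/SciPy counterparts; this is an illustrative sketch, with scipy.linalg.solve_triangular standing in for the triangular solve:

```python
import numpy as np
from scipy.linalg import solve_triangular

m, n = 4, 3
A = np.ones((m, n))
x = np.arange(1.0, n + 1)
y = np.ones(m)
T = np.tril(np.ones((m, m))) + np.eye(m)   # a nonsingular lower triangular matrix

y = y + A @ x                              # sgemv: y <- y + A*x
A = A + np.outer(y, x)                     # sger:  A <- A + y*x'
x_sol = solve_triangular(T, y, lower=True) # strsv: solve T*x = y for x
```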

29 Level 3 BLAS
Operate on pairs or triples of matrices, returning a matrix; complexity is O(n^3). Examples:
sgemm: matrix-matrix multiplication; computes C = C + A*B, where C is m-by-n, A is m-by-k, and B is k-by-n.
strsm: triangular solve with multiple right-hand sides; solves T*X = Y for X, where T is a triangular matrix and X is a rectangular matrix.
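Likewise for Level 3, where the operands are whole matrices; again an illustrative sketch rather than a call into the actual BLAS library:

```python
import numpy as np
from scipy.linalg import solve_triangular

m, n, k = 4, 5, 3
A = np.random.rand(m, k)
B = np.random.rand(k, n)
C = np.zeros((m, n))
T = np.tril(np.random.rand(m, m)) + np.eye(m)   # nonsingular lower triangular
Y = np.random.rand(m, n)

C = C + A @ B                                   # sgemm: C <- C + A*B
X = solve_triangular(T, Y, lower=True)          # strsm: solve T*X = Y for X
```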

30 Performance of BLAS (figure: measured speeds of Level 1, Level 2, and Level 3 BLAS)

31 Performance of BLAS (cont.)
- BLAS are specially optimized by the vendor (IBM) to take advantage of all features of the RS 6000/590.
- Potentially a big speed advantage if an algorithm can be expressed in terms of the BLAS3 instead of BLAS2 or BLAS1.
- The top speed of the BLAS3, about 250 Mflops, is very close to the peak machine speed of 266 Mflops.
- We will reorganize algorithms, like Gaussian elimination, so that they use BLAS3 rather than BLAS1 or BLAS2.

32 Explanation of Performance of BLAS
m = number of memory references to slow memory (read + write)
f = number of floating point operations
q = f/m = average number of flops per slow memory reference

         m            justification for m                 f        q
saxpy    3*n          read x(i), y(i); write y(i)         2*n      2/3
sgemv    n^2 + O(n)   read each A(i,j) once               2*n^2    2
sgemm    4*n^2        read A(i,j), B(i,j), C(i,j);        2*n^3    n/2
                      write C(i,j) once
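A quick way to reproduce these ratios is to plug in a concrete n (a small illustration; the exact O(n) term used for sgemv below is an assumption):

```python
n = 1000
kernels = {
    # name: (slow memory references m, flops f)
    "saxpy": (3 * n,          2 * n),
    "sgemv": (n**2 + 3 * n,   2 * n**2),   # assumed O(n) term: read x, read/write y
    "sgemm": (4 * n**2,       2 * n**3),
}
for name, (m, f) in kernels.items():
    print(f"{name}: q = f/m = {f / m:.1f} flops per slow memory reference")
# saxpy stays near 2/3, sgemv near 2, while sgemm grows like n/2 = 500
```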

33 CS 267 Applications of Parallel Processors
Lecture 10: Large Dense Linear Systems - Distributed Implementations
2/21/97
Horst D. Simon
http://www.cs.berkeley.edu/cs267

34 Review - Lecture 9
- computational electromagnetics and linear systems
- rewrote Gaussian elimination as vector and matrix-vector operations (level 2 BLAS)
- discussed the efficiency of level 3 BLAS in terms of reducing the number of memory accesses

35 Outline - Lecture 10
- Layout of matrices on distributed memory machines
- Distributed Gaussian elimination
- Speeding up with advanced algorithms
- LINPACK and LAPACK
- LINPACK benchmark
- Tflops result

36 Review of Gaussian Elimination
Now we use Matlab (data parallel) notation to express the algorithm even more compactly:

for i = 1 to n-1
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

The inner loop consists of one vector operation and one matrix-vector (rank-one update) operation. Note that the loop looks elegant, but is no longer as intuitive.

37 Review of Gaussian Elimination (cont.) (figure)

38 Partial Pivoting
Reorder the rows of A so that A(i,i) is large at each step of the algorithm. At step i of the algorithm, row i is swapped with row k > i if |A(k,i)| is the largest entry among |A(i:n,i)|.

for i = 1 to n-1
    find and record k where |A(k,i)| = max_{i<=j<=n} |A(j,i)|
    if |A(k,i)| = 0, exit with a warning that A is singular, or nearly so
    if i != k, swap rows i and k of A
    A(i+1:n, i) = A(i+1:n, i) / A(i,i)   ... each quotient lies in [-1,1]
    A(i+1:n, i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

39 How to Use Level 3 BLAS?
The current algorithm only uses level 1 and level 2 BLAS. We want to use level 3 BLAS because of their higher performance. The standard technique is called blocking or delayed updating: we save up a sequence of level 2 operations and do them all at once.

40 How to Use Level 3 BLAS in LU Decomposition
- process the matrix in blocks of b columns at a time; b is called the block size
- do a complete LU decomposition just of the b columns in the current block, essentially using the above BLAS2 code
- then update the remainder of the matrix by doing b rank-one updates all at once, which turns out to be a single matrix-matrix multiplication with inner dimension b

41 Block GE with Level 3 BLAS (figure)

42 Block GE with Level 3 BLAS
Gaussian elimination with partial pivoting, BLAS3 implementation

... process matrix b columns at a time
for ib = 1 to n-1 step b
    ... point to end of block of b columns
    end = min(ib+b-1, n)
    ... LU factorize A(ib:n, ib:end) with BLAS2
    for i = ib to end
        find and record k where |A(k,i)| = max_{i<=j<=n} |A(j,i)|
        if |A(k,i)| = 0, exit with a warning that A is singular, or nearly so
        if i != k, swap rows i and k of A
        A(i+1:n, i) = A(i+1:n, i) / A(i,i)
        ... only update columns i+1 to end
        A(i+1:n, i+1:end) = A(i+1:n, i+1:end) - A(i+1:n, i) * A(i, i+1:end)
    endfor

43 Block GE with Level 3 BLAS (cont.)

    ... Let LL be the b-by-b lower triangular matrix whose subdiagonal entries
    ... are stored in A(ib:end, ib:end), with 1s on the diagonal.
    ... Do the delayed update of A(ib:end, end+1:n) by solving
    ... n-end triangular systems
    A(ib:end, end+1:n) = LL \ A(ib:end, end+1:n)
    ... do the delayed update of the rest of the matrix
    ... using matrix-matrix multiplication
    A(end+1:n, end+1:n) = A(end+1:n, end+1:n) - A(end+1:n, ib:end) * A(ib:end, end+1:n)
endfor

44 Block GE with Level 3 BLAS (cont.)
- LU factorization of A(ib:n, ib:end) uses the same algorithm as before (level 2 BLAS).
- Solving a system of n-end equations with triangular coefficient matrix LL is a single call to a BLAS3 subroutine (strsm) designed for that purpose.
- No work or data motion is required to refer to LL; it is done with a pointer.
- When n >> b, almost all the work is done in the final line, which multiplies an (n-end)-by-b matrix times a b-by-(n-end) matrix in a single BLAS3 call (to sgemm).
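Putting the pieces together, a simplified NumPy sketch of this blocked, right-looking LU (my own rendering, with full-row pivot swaps and scipy.linalg.solve_triangular playing the role of strsm) might look like this:

```python
import numpy as np
from scipy.linalg import solve_triangular

def blocked_lu(A, b=64):
    """Right-looking blocked LU with partial pivoting (sketch only).
    Overwrites A with its LU factors; pivot swaps are applied to whole rows."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    for ib in range(0, n, b):
        end = min(ib + b, n)
        # BLAS2-style panel factorization of A(ib:n, ib:end)
        for i in range(ib, end):
            k = i + np.argmax(np.abs(A[i:, i]))
            if i != k:
                A[[i, k], :] = A[[k, i], :]        # swap full rows i and k
            A[i+1:, i] /= A[i, i]
            A[i+1:, i+1:end] -= np.outer(A[i+1:, i], A[i, i+1:end])
        if end < n:
            # LL: unit lower triangular block stored in A(ib:end, ib:end)
            LL = np.tril(A[ib:end, ib:end], -1) + np.eye(end - ib)
            # delayed update of the block row: triangular solves (like strsm)
            A[ib:end, end:] = solve_triangular(LL, A[ib:end, end:], lower=True)
            # delayed update of the trailing matrix: one big multiply (like sgemm)
            A[end:, end:] -= A[end:, ib:end] @ A[ib:end, end:]
    return A
```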

45 How to select b?
b will be chosen in a machine dependent way to maximize performance. A good value of b will have the following properties:
- b is small enough so that the b columns currently being LU-factorized fit in the fast memory (cache, say) of the machine.
- b is large enough to make matrix-matrix multiplication fast.

46 LINPACK - LAPACK - ScaLAPACK
LINPACK - linear systems and least squares problems; level 1 BLAS - late 70s
LAPACK - redesigned LINPACK to include eigenvalue software; level 3 BLAS for vector and shared memory parallel machines - late 80s
ScaLAPACK - scalable LAPACK based on the BLACS for communication; distributed memory machines - mid 90s

47 Efficiency on Cray C90 (figure)

48 Comparison of Different Machines

Machine               #Procs  Clock Speed (MHz)  Peak Mflops  Block Size b
Convex C4640             1         135                810          64
Convex C4640             4         135               3240          64
Cray C90                 1         240                952         128
Cray C90                16         240              15238         128
DEC Alpha 3000-500X      1         200                200          32
IBM RS 6000/590          1          66                264          64
SGI Power Challenge      1          75                300          64

49 Efficiency of LAPACK LU, for n=1000 (figure)

50 Efficiency of LAPACK LU, for n=1000
- LU factorization is almost as efficient as matrix-matrix multiply on most machines, except on the C90 (16 processors). (Why?)
- LAPACK LU is almost as good as the best vendor effort.
- There is a trade-off between performance and portability.
- Vendors place a premium on LU performance - why?

51 LINPACK Benchmark
- named after the LINPACK package
- originally consisted of timings for 100-by-100 matrices; no vendor optimization (code changes) permitted
- an interesting historical record, with literally every machine of the last two decades listed in decreasing order of speed, from the largest supercomputers to a hand-held calculator
- as machines grew faster, 1000-by-1000 matrices were introduced (all code changes allowed)
- a third benchmark was added for large parallel machines, which measures their speed on the largest linear system that fits in memory, as well as the size of the system required to reach half the Mflop rate of the largest matrix

52 LINPACK Benchmark

Computer                                     Num_Procs  Rmax(GFlops)  Nmax(order)  N1/2(order)  Rpeak(GFlops)
Intel ASCI Option Red (200 MHz Pentium Pro)     7264       1068.        215000        53400         1453
CP-PACS* (150 MHz PA-RISC based CPU)            2048        368.2       103680        30720          614
Intel Paragon XP/S MP (50 MHz OS=SUNMOS)        6768        281.1       128600        25700          338
Intel Paragon XP/S MP (50 MHz OS=SUNMOS)        6144        256.2       122500        24300          307
Numerical Wind Tunnel* (9.5 ns)                  167        229.7        66132        18018          281
Intel Paragon XP/S MP (50 MHz OS=SUNMOS)        5376        223.6       114500        22900          269
HITACHI SR2201/1024 (150 MHz)                   1024        220.4       138240        34560          307
Fujitsu VPP500/153 (10 nsec)                     153        200.6        62730        17000          245
Numerical Wind Tunnel* (9.5 ns)                  140        195.0        60480        15730          236
Intel Paragon XP/S MP (50 MHz OS=SUNMOS)        4608        191.5       106000        21000          230
Numerical Wind Tunnel* (9.5 ns)                  128        179.2        56832        14800          216

53 Efficiency of LAPACK LU, for n=100 (figure)

54 Data Layouts for Distributed Memory Machines
The two main issues in choosing a data layout for Gaussian elimination are:
1) load balance, or splitting the work reasonably evenly among the processors
2) the ability to use BLAS3 during computations on a single processor, to account for the memory hierarchy on each processor
Several layouts will be discussed here. All of these are part of HPF. Solving linear systems served as a prototype for these designs.

55 Gaussian Elimination using BLAS 3 (figure)

56 Column Blocked
- column i is stored on processor floor(i/c), where c = ceiling(n/p) is the maximum number of columns stored per processor
- does not permit good load balancing: after c columns have been computed, processor 0 is idle
- the Row Blocked layout has a similar problem
(example: n=16 and p=4)

57 Column Cyclic
- each processor owns approximately 1/p-th of the square southeast corner of the matrix, so the load balance is good
- but single columns are stored rather than blocks, which means we cannot use BLAS3 for the updates
- the transpose of this layout, the Row Cyclic layout, has a similar problem

58 Column Block Cyclic
- choose a block size b, divide the columns into groups of size b, and distribute these groups cyclically
- for b > 1, slightly worse load balance than the Column Cyclic layout, but BLAS2 and BLAS3 can be used
- for b < c, better load balance than the Column Blocked layout, but the BLAS can only be called on smaller subproblems, taking less advantage of the local memory hierarchy
- disadvantage: the factorization of A(ib:n, ib:end) may take place on just one processor, a possible serial bottleneck
(example: n=16, p=4 and b=2; b is not necessarily the BLAS3 block size)
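The column layouts on the last three slides can be summarized by their owner functions; the sketch below (0-based column indices, hypothetical helper names, illustration only) reproduces the n=16, p=4, b=2 examples:

```python
import math

def owner_blocked(i, n, p):
    c = math.ceil(n / p)        # maximum number of columns per processor
    return i // c               # Column Blocked

def owner_cyclic(i, p):
    return i % p                # Column Cyclic

def owner_block_cyclic(i, p, b):
    return (i // b) % p         # Column Block Cyclic

n, p, b = 16, 4, 2
print("blocked:     ", [owner_blocked(i, n, p) for i in range(n)])
print("cyclic:      ", [owner_cyclic(i, p) for i in range(n)])
print("block cyclic:", [owner_block_cyclic(i, p, b) for i in range(n)])
```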

59 Row and Column Block Cyclic
- processors and matrix blocks are distributed in a 2D array
- pcol-fold parallelism in any column, and calls to BLAS2 and BLAS3 on matrices of size brow-by-bcol
- the serial bottleneck is eased
- the layout need not be symmetric in rows and columns
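The same idea extends to two dimensions; a small sketch of the owner map for a prow-by-pcol processor grid (0-based indices, illustration only, in the spirit of this layout):

```python
def owner_2d_block_cyclic(i, j, brow, bcol, prow, pcol):
    """Return the (processor-row, processor-column) owning entry (i, j)
    under a 2D block cyclic layout with block sizes brow x bcol."""
    return ((i // brow) % prow, (j // bcol) % pcol)

# Example: 8x8 matrix, 2x2 blocks on a 2x2 processor grid
for i in range(8):
    print([owner_2d_block_cyclic(i, j, 2, 2, 2, 2) for j in range(8)])
```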

60 Skewered Block
- each row and each column is shared among all p processors, so p-fold parallelism is available for any row operation or any column operation
- in contrast, the 2D block cyclic layout can have at most sqrt(p)-fold parallelism in all the rows and all the columns
- not useful for Gaussian elimination, but useful in a variety of other matrix operations

61 Distributed GE with a 2D Block Cyclic Layout
- the block size b in the algorithm and the block sizes brow and bcol in the layout satisfy b = brow = bcol
- shaded regions indicate busy processors or communication performed
- it is unnecessary to have a barrier between each step of the algorithm; e.g. steps 9, 10, and 11 can be pipelined

62 Distributed GE with a 2D Block Cyclic Layout (figure)

63 (figure)

64 ScaLAPACK LU Performance Results (figure)

65 Teraflop/s Performance Result
“Sorry for the delay in responding. The system had about 7000 200Mhz Pentium Pro Processors. It solved a 64bit real matrix of size 216000. It did not use Strassen. The algorithm was basically the same that Robert van de Geijn used on the Delta years ago. It does a 2D block cyclic map of the matrix and requires a power of 2 number of nodes in the vertical direction. The basic block size was 64x64. A custom dual processor matrix multiply was written for the DGEMM call. It took a little less than 2 hours to run.”

