CS267 Dense Linear Algebra I.1 Demmel Fa 2001 CS 267 Applications of Parallel Computers Dense Linear Algebra James Demmel

CS267 Dense Linear Algebra I.2 Demmel Fa 2001 Outline
°Motivation for Dense Linear Algebra (as opposed to sparse)
°Benchmarks
°Review Gaussian Elimination (GE) for solving Ax=b
°Optimizing GE for caches on sequential machines using matrix-matrix multiplication (BLAS)
°LAPACK library overview and performance
°Data layouts on parallel machines
°Parallel matrix-matrix multiplication review
°Parallel Gaussian Elimination
°ScaLAPACK library overview
°Eigenvalue problems
°Open Problems

CS267 Dense Linear Algebra I.3 Demmel Fa 2001 Motivation
°3 Basic Linear Algebra Problems
 Linear Equations: Solve Ax=b for x
 Least Squares: Find x that minimizes Σ r_i² where r = Ax-b
 Eigenvalues: Find λ and x where Ax = λx
 Lots of variations depending on the structure of A (e.g. symmetry)
°Why dense A, as opposed to sparse A?
 Aren’t “most” large matrices sparse?
 Dense algorithms are easier to understand
 Some applications yield large dense matrices
  -Ax=b: Computational Electromagnetics
  -Ax = λx: Quantum Chemistry
 Benchmarking
  -“How fast is your computer?” = “How fast can you solve dense Ax=b?”
 Large sparse matrix algorithms often yield smaller (but still large) dense problems
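As a concrete, non-slide illustration, the three problem types map directly onto standard dense solvers. A minimal NumPy sketch (sizes and data are arbitrary placeholders):

  import numpy as np

  n = 4
  rng = np.random.default_rng(0)
  A = rng.standard_normal((n, n))
  b = rng.standard_normal(n)

  x = np.linalg.solve(A, b)                                       # linear equations: A x = b
  x_ls, res, rank, sv = np.linalg.lstsq(A[:, :2], b, rcond=None)  # least squares: min ||A x - b||_2
  lam, V = np.linalg.eig(A)                                       # eigenvalues: A x = lambda x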

CS267 Dense Linear Algebra I.4 Demmel Fa 2001 Computational Electromagnetics – Solve Ax=b
°Developed during the 1980s, driven by defense applications
°Determine the RCS (radar cross section) of an airplane
°Reduce the signature of a plane (stealth technology)
°Other applications: antenna design, medical equipment
°Two fundamental numerical approaches:
 MOM = method of moments (frequency domain): large dense matrices
 Finite differences (time domain): even larger sparse matrices

CS267 Dense Linear Algebra I.5 Demmel Fa 2001 Computational Electromagnetics
image: NW Univ. Comp. Electromagnetics Laboratory
- Discretize the surface into triangular facets using standard modeling tools
- Amplitudes of the currents on the surface are the unknowns
- The integral equation is discretized into a set of linear equations

CS267 Dense Linear Algebra I.6 Demmel Fa 2001 Computational Electromagnetics (MOM)
After discretization the integral equation has the form A x = b, where A is the (dense) impedance matrix, x is the unknown vector of amplitudes, and b is the excitation vector.
(see Cwik, Patterson, and Scott, “Electromagnetic Scattering on the Intel Touchstone Delta,” IEEE Supercomputing ’92)

CS267 Dense Linear Algebra I.7 Demmel Fa 2001 Results for Parallel Implementation on Intel Delta

 Task                                    Time (hours)
 Fill (compute n² matrix entries)        9.20   (embarrassingly parallel but slow)
 Factor (Gaussian Elimination, O(n³))    8.25   (good parallelism with the right algorithm)
 Solve (O(n²))                           2.17   (reasonable parallelism with the right algorithm)
 Field Calc. (O(n))                      0.12   (embarrassingly parallel and fast)

The problem solved was for a matrix of size 48,…; the Gflops rate for Factor was the world record in 1991.

CS267 Dense Linear Algebra I.8 Demmel Fa 2001 Current Records for Solving Dense Systems

 Year    System Size   Machine                             # Procs   Gflops (Peak)
 1950's  O(100)
 …       …,600         Intel Paragon                       …         … (338)
 …       …,000         Intel ASCI Red                      …         … (1453)
 …       …,000         Cray T3E                            …         … (1786)
 …       …,000         Intel ASCI Red (200 MHz Ppro)       …         … (1830)
 …       …,000         SGI ASCI Blue                       …         … (2520)
 …       …,000         Intel ASCI Red (333 MHz Xeon)       …         … (3207)
 …       …,000         IBM ASCI White (375 MHz Power 3)    …         … (12000)

source: Alan Edelman; LINPACK Benchmark (Performance DataBase Server)

CS267 Dense Linear Algebra I.9 Demmel Fa 2001 Current Records for Solving Small Dense Systems

 Megaflops
 Machine                              n=100   n=1000   Peak
 Fujitsu VPP (1 proc, 300 MHz)        …       …        …
 Cray T90 (32 proc, 450 MHz)          …       …        …
          (4 proc, 450 MHz)           …       …        …
 IBM Power 4 (1 proc, 1300 MHz)       …       …        …
 …
 Dell Itanium (4 proc, 800 MHz)       …       …        …
              (2 proc, 800 MHz)       …       …        …
              (1 proc, 800 MHz)       …       …        …

source: LINPACK Benchmark (Performance DataBase Server)

CS267 Dense Linear Algebra I.10 Demmel Fa 2001 Computational Chemistry – Ax = λx
°Seek energy levels of a molecule, crystal, etc.
 Solve Schroedinger’s Equation for energy levels = eigenvalues
 Discretize to get Ax = λBx, solve for eigenvalues λ and eigenvectors x
 A and B large, symmetric or Hermitian matrices (B positive definite)
 May want some or all eigenvalues and/or eigenvectors
°MP-Quest (Sandia NL)
 Si and sapphire crystals of up to 3072 atoms
 Local Density Approximation to Schroedinger Equation
 A and B up to n=40000, complex Hermitian
 Need all eigenvalues and eigenvectors
 Need to iterate up to 20 times (for self-consistency)
°Implemented on Intel ASCI Red
 9200 Pentium Pro 200 processors (4600 Duals, a CLUMP)
 Overall application ran at 605 Gflops (out of 1800 Gflops peak); Eigensolver ran at 684 Gflops
 Runner-up for Gordon Bell Prize at Supercomputing 98
°Same problem arises in astrophysics...

CS267 Dense Linear Algebra I.11 Demmel Fa 2001 Review of Gaussian Elimination (GE) for solving Ax=b
°Add multiples of each row to later rows to make A upper triangular
°Solve the resulting triangular system Ux = c by substitution

 … for each column i,
 … zero it out below the diagonal by adding multiples of row i to later rows
 for i = 1 to n-1
   … for each row j below row i
   for j = i+1 to n
     … add a multiple of row i to row j
     for k = i to n
       A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)
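A direct transcription of this pseudocode into NumPy, as a sketch (not part of the original slides; it assumes no zero pivots are encountered):

  import numpy as np

  def ge_basic(A):
      # Basic Gaussian elimination, no pivoting: mirrors the pseudocode above.
      A = A.astype(float).copy()
      n = A.shape[0]
      for i in range(n - 1):             # for each column i
          for j in range(i + 1, n):      # for each row j below row i
              m = A[j, i] / A[i, i]      # multiplier
              A[j, i:] -= m * A[i, i:]   # add a multiple of row i to row j
      return A                           # upper triangle now holds U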

CS267 Dense Linear Algebra I.12 Demmel Fa 2001 Refine GE Algorithm (1)
°Initial Version
 … for each column i, zero it out below the diagonal by adding multiples of row i to later rows
 for i = 1 to n-1
   … for each row j below row i
   for j = i+1 to n
     … add a multiple of row i to row j
     for k = i to n
       A(j,k) = A(j,k) - (A(j,i)/A(i,i)) * A(i,k)

°Remove computation of the constant A(j,i)/A(i,i) from the inner loop
 for i = 1 to n-1
   for j = i+1 to n
     m = A(j,i)/A(i,i)
     for k = i to n
       A(j,k) = A(j,k) - m * A(i,k)

CS267 Dense Linear Algebra I.13 Demmel Fa 2001 Refine GE Algorithm (2)
°Last version
 for i = 1 to n-1
   for j = i+1 to n
     m = A(j,i)/A(i,i)
     for k = i to n
       A(j,k) = A(j,k) - m * A(i,k)

°Don’t compute what we already know: zeros below the diagonal in column i
 for i = 1 to n-1
   for j = i+1 to n
     m = A(j,i)/A(i,i)
     for k = i+1 to n
       A(j,k) = A(j,k) - m * A(i,k)

CS267 Dense Linear Algebra I.14 Demmel Fa 2001 Refine GE Algorithm (3)
°Last version
 for i = 1 to n-1
   for j = i+1 to n
     m = A(j,i)/A(i,i)
     for k = i+1 to n
       A(j,k) = A(j,k) - m * A(i,k)

°Store the multipliers m below the diagonal in the zeroed entries for later use
 for i = 1 to n-1
   for j = i+1 to n
     A(j,i) = A(j,i)/A(i,i)
     for k = i+1 to n
       A(j,k) = A(j,k) - A(j,i) * A(i,k)

CS267 Dense Linear Algebra I.15 Demmel Fa 2001 Refine GE Algorithm (4)
°Last version
 for i = 1 to n-1
   for j = i+1 to n
     A(j,i) = A(j,i)/A(i,i)
     for k = i+1 to n
       A(j,k) = A(j,k) - A(j,i) * A(i,k)

°Split Loop
 for i = 1 to n-1
   for j = i+1 to n
     A(j,i) = A(j,i)/A(i,i)
   for j = i+1 to n
     for k = i+1 to n
       A(j,k) = A(j,k) - A(j,i) * A(i,k)

CS267 Dense Linear Algebra I.16 Demmel Fa 2001 Refine GE Algorithm (5)
°Last version
 for i = 1 to n-1
   for j = i+1 to n
     A(j,i) = A(j,i)/A(i,i)
   for j = i+1 to n
     for k = i+1 to n
       A(j,k) = A(j,k) - A(j,i) * A(i,k)

°Express using matrix operations (BLAS)
 for i = 1 to n-1
   A(i+1:n,i) = A(i+1:n,i) * ( 1 / A(i,i) )
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

CS267 Dense Linear Algebra I.17 Demmel Fa 2001 What GE really computes
°Call the strictly lower triangular matrix of multipliers M, and let L = I+M
°Call the upper triangle of the final matrix U
°Lemma (LU Factorization): If the above algorithm terminates (does not divide by zero) then A = L*U
°Solving A*x=b using GE
 Factorize A = L*U using GE (cost = 2/3·n³ flops)
 Solve L*y = b for y, using substitution (cost = n² flops)
 Solve U*x = y for x, using substitution (cost = n² flops)
°Thus A*x = (L*U)*x = L*(U*x) = L*y = b as desired

 for i = 1 to n-1
   A(i+1:n,i) = A(i+1:n,i) / A(i,i)
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
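The same computation in NumPy, as a sketch (no pivoting; function names are made up for illustration), showing the factor-then-substitute solve described above:

  import numpy as np

  def lu_nopivot(A):
      # In-place LU: multipliers (L) stored below the diagonal, U on and above it.
      A = A.astype(float).copy()
      n = A.shape[0]
      for i in range(n - 1):
          A[i+1:, i] /= A[i, i]
          A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
      return A

  def lu_solve(LU, b):
      # Solve A x = b given packed factors: L y = b (forward), then U x = y (backward).
      n = LU.shape[0]
      y = b.astype(float).copy()
      for i in range(n):                       # forward substitution, L has unit diagonal
          y[i] -= LU[i, :i] @ y[:i]
      x = y
      for i in range(n - 1, -1, -1):           # back substitution with U
          x[i] = (x[i] - LU[i, i+1:] @ x[i+1:]) / LU[i, i]
      return x

A quick check: with L = np.tril(LU, -1) + np.eye(n) and U = np.triu(LU), the product L @ U should reproduce A up to roundoff.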

CS267 Dense Linear Algebra I.18 Demmel Fa 2001 Problems with basic GE algorithm
°What if some A(i,i) is zero? Or very small?
 Result may not exist, or be “unstable”, so we need to pivot
°Current computation is all BLAS 1 or BLAS 2, but we know that BLAS 3 (matrix multiply) is fastest (earlier lectures…)

 for i = 1 to n-1
   A(i+1:n,i) = A(i+1:n,i) / A(i,i)                 … BLAS 1 (scale a vector)
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n)
                    - A(i+1:n, i) * A(i, i+1:n)     … BLAS 2 (rank-1 update)

[Figure: measured performance of Peak vs. BLAS 3 vs. BLAS 2 vs. BLAS 1]

CS267 Dense Linear Algebra I.19 Demmel Fa 2001 Pivoting in Gaussian Elimination
° A = [ 0 1 ]   fails completely, even though A is “easy”
      [ 1 0 ]
° Illustrate problems in 3-decimal-digit arithmetic:
  A = [ 1e-4 1 ]  and  b = [ 1 ] ;  the correct answer to 3 places is x = [ 1 ]
      [ 1    1 ]           [ 2 ]                                          [ 1 ]
° Result of LU decomposition is
  L = [ 1          0 ] = [ 1   0 ]        … No roundoff error yet
      [ fl(1/1e-4) 1 ]   [ 1e4 1 ]
  U = [ 1e-4 1            ] = [ 1e-4  1    ]   … Error in 4th decimal place
      [ 0    fl(1-1e4*1)  ]   [ 0    -1e4  ]
  Check: L*U = [ 1e-4 1 ]   … (2,2) entry entirely wrong
               [ 1    0 ]
° Algorithm “forgets” the (2,2) entry; it gets the same L and U for all |A(2,2)| < 5
° Numerical instability
° Computed solution x totally inaccurate
° Cure: Pivot (swap rows of A) so that the entries of L and U are bounded

CS267 Dense Linear Algebra I.20 Demmel Fa 2001 Gaussian Elimination with Partial Pivoting (GEPP)
° Partial Pivoting: swap rows so that each multiplier |L(j,i)| = |A(j,i)/A(i,i)| <= 1

 for i = 1 to n-1
   find and record k where |A(k,i)| = max{i <= j <= n} |A(j,i)|
     … i.e. largest entry in the rest of column i
   if |A(k,i)| = 0
     exit with a warning that A is singular, or nearly so
   elseif k != i
     swap rows i and k of A
   end if
   A(i+1:n,i) = A(i+1:n,i) / A(i,i)        … each quotient lies in [-1,1]
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)

° Lemma: This algorithm computes A = P*L*U, where P is a permutation matrix
° Since each entry |L(j,i)| <= 1, this algorithm is considered numerically stable
° For details see the LAPACK code at
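A NumPy sketch of the same loop with partial pivoting (illustrative only; a production code would call LAPACK's getrf instead):

  import numpy as np

  def gepp(A):
      # GEPP sketch: packed LU factors plus the pivot row chosen at each step.
      A = A.astype(float).copy()
      n = A.shape[0]
      pivots = np.zeros(n - 1, dtype=int)
      for i in range(n - 1):
          k = i + np.argmax(np.abs(A[i:, i]))   # largest entry in the rest of column i
          if A[k, i] == 0:
              raise ValueError("A is singular, or nearly so")
          if k != i:
              A[[i, k], :] = A[[k, i], :]       # swap rows i and k of A
          pivots[i] = k
          A[i+1:, i] /= A[i, i]                 # each quotient lies in [-1, 1]
          A[i+1:, i+1:] -= np.outer(A[i+1:, i], A[i, i+1:])
      return A, pivots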

CS267 Dense Linear Algebra I.21 Demmel Fa 2001 Converting BLAS2 to BLAS3 in GEPP
°Blocking
 Used to optimize matrix multiplication
 Harder here because of the data dependencies in GEPP
°Delayed Updates
 Save updates to the “trailing matrix” from several consecutive BLAS2 updates
 Apply many saved updates simultaneously in one BLAS3 operation
°Same idea works for much of dense linear algebra
 Open questions remain
°Need to choose a block size b
 Algorithm will save and apply b updates
 b must be small enough so that the active submatrix consisting of b columns of A fits in cache
 b must be large enough to make BLAS3 fast

CS267 Dense Linear Algebra I.22 Demmel Fa 2001 Blocked GEPP

 for ib = 1 to n-1 step b           … Process matrix b columns at a time
   end = ib + b-1                   … Point to end of block of b columns
   apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P’ * L’ * U’
   … let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
   A(ib:end, end+1:n) = LL⁻¹ * A(ib:end, end+1:n)     … update next b rows of U
   A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                         - A(end+1:n, ib:end) * A(ib:end, end+1:n)
     … apply delayed updates with a single matrix multiply, with inner dimension b

(For a correctness proof, see the on-line notes.)
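A simplified NumPy sketch of this blocked structure (pivoting is omitted to keep it short, so this is blocked LU rather than full GEPP, and np.linalg.solve stands in for the triangular solve with LL):

  import numpy as np

  def blocked_lu(A, b=32):
      # Right-looking blocked LU sketch, mirroring the delayed-update structure above.
      A = A.astype(float).copy()
      n = A.shape[0]
      for ib in range(0, n, b):
          end = min(ib + b, n)
          for i in range(ib, end):              # unblocked factorization of the panel
              A[i+1:, i] /= A[i, i]
              A[i+1:, i+1:end] -= np.outer(A[i+1:, i], A[i, i+1:end])
          LL = np.tril(A[ib:end, ib:end], -1) + np.eye(end - ib)
          A[ib:end, end:] = np.linalg.solve(LL, A[ib:end, end:])   # update next b rows of U
          A[end:, end:] -= A[end:, ib:end] @ A[ib:end, end:]       # delayed update: one matmul
      return A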

CS267 Dense Linear Algebra I.23 Demmel Fa 2001 Efficiency of Blocked GEPP

CS267 Dense Linear Algebra I.24 Demmel Fa 2001 Overview of LAPACK
°Standard library for dense/banded linear algebra
 Linear systems: A*x=b
 Least squares problems: min_x || A*x-b ||₂
 Eigenvalue problems: Ax = λx, Ax = λBx
 Singular value decomposition (SVD): A = UΣVᵀ
°Algorithms reorganized to use BLAS3 as much as possible
°Basis of math libraries on many computers, Matlab 6
°Many algorithmic innovations remain
 Projects available
 Automatic optimization
 Quadtree matrix data structures for locality
 New eigenvalue algorithms
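For perspective (not from the slides): SciPy's dense routines are LAPACK-backed, so the four problem classes above look like this from Python; exact driver choices depend on the SciPy/LAPACK build:

  import numpy as np
  from scipy import linalg

  rng = np.random.default_rng(1)
  A = rng.standard_normal((6, 6))
  b = rng.standard_normal(6)

  x = linalg.solve(A, b)                 # linear system A x = b (LU-based driver)
  x_ls, *_ = linalg.lstsq(A[:, :3], b)   # least squares  min ||A x - b||_2
  w, V = linalg.eigh(A + A.T)            # symmetric eigenvalue problem A x = lambda x
  U, s, Vt = linalg.svd(A)               # singular value decomposition A = U * Sigma * V^T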

CS267 Dense Linear Algebra I.25 Demmel Fa 2001 Performance of LAPACK (n=1000)

CS267 Dense Linear Algebra I.26 Demmel Fa 2001 Performance of LAPACK (n=100)

CS267 Dense Linear Algebra I.27 Demmel Fa 2001 Recursive Algorithms
°Still uses delayed updates, but organized differently (formulas on board)
°Can exploit recursive data layouts
 3x speedups on least squares for tall, thin matrices
°Theoretically optimal memory hierarchy performance
°See references at

CS267 Dense Linear Algebra I.28 Demmel Fa 2001 Recursive Algorithms – Limits
°Two kinds of dense matrix factorizations
°One Sided
 Sequence of simple operations applied on the left of the matrix
 Gaussian Elimination: A = L*U or A = P*L*U
 -Symmetric Gaussian Elimination: A = L*D*Lᵀ
 -Cholesky: A = L*Lᵀ
 QR Decomposition for Least Squares: A = Q*R
 Can be nearly 100% BLAS 3
 Susceptible to recursive algorithms
°Two Sided
 Sequence of simple operations applied on both sides, alternating
 Eigenvalue algorithms, SVD
 At least ~25% BLAS 2
 Seem impervious to the recursive approach
 Some recent progress on SVD (25% vs 50% BLAS2)

CS267 Dense Linear Algebra I.29 Demmel Fa 2001 Parallelizing Gaussian Elimination
°Recall the parallelization steps from an earlier lecture
 Decomposition: identify enough parallel work, but not too much
 Assignment: load balance work among threads
 Orchestration: communication and synchronization
 Mapping: which processors execute which threads
°Decomposition
 In the BLAS 2 algorithm nearly every flop in the inner loop can be done in parallel, so with n² processors we need 3n parallel steps
 This is too fine-grained; prefer calls to local matmuls instead
 Need to discuss parallel matrix multiplication
°Assignment
 Which processors are responsible for which submatrices?

 for i = 1 to n-1
   A(i+1:n,i) = A(i+1:n,i) / A(i,i)                 … BLAS 1 (scale a vector)
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n)
                    - A(i+1:n, i) * A(i, i+1:n)     … BLAS 2 (rank-1 update)

CS267 Dense Linear Algebra I.30 Demmel Fa 2001 Different Data Layouts for Parallel GE (on 4 procs)
[Figure: candidate data layouts, with captions:]
 - Bad load balance: P0 idle after first n/4 steps
 - Load balanced, but can’t easily use BLAS2 or BLAS3
 - Can trade load balance and BLAS2/3 performance by choosing b, but factorization of the block column is a bottleneck
 - Complicated addressing
 - The winner!

CS267 Dense Linear Algebra I.31 Demmel Fa 2001
PDGEMM = PBLAS routine for matrix multiply
Observations:
 For fixed N, as P increases, Mflops increases, but with less than 100% efficiency
 For fixed P, as N increases, Mflops (efficiency) rises

DGEMM = BLAS routine for matrix multiply
Maximum speed for PDGEMM = # Procs * speed of DGEMM
Observations (same as above):
 Efficiency is always at least 48%
 For fixed N, as P increases, efficiency drops
 For fixed P, as N increases, efficiency increases

CS267 Dense Linear Algebra I.32 Demmel Fa 2001 Review: BLAS 3 (Blocked) GEPP

 for ib = 1 to n-1 step b           … Process matrix b columns at a time
   end = ib + b-1                   … Point to end of block of b columns
   apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P’ * L’ * U’
   … let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
   A(ib:end, end+1:n) = LL⁻¹ * A(ib:end, end+1:n)     … update next b rows of U
   A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                         - A(end+1:n, ib:end) * A(ib:end, end+1:n)     … BLAS 3
     … apply delayed updates with a single matrix multiply, with inner dimension b

CS267 Dense Linear Algebra I.33 Demmel Fa 2001 Review: Row and Column Block Cyclic Layout
°Processors and matrix blocks are distributed in a 2d array
°pcol-fold parallelism in any column, and calls to BLAS2 and BLAS3 on matrices of size brow-by-bcol
°The serial bottleneck is eased
°Need not be symmetric in rows and columns
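The ownership rule behind this layout is simple; a sketch (the function name is made up) of which processor owns a given matrix entry in a 2D block cyclic distribution over a prow-by-pcol grid:

  def owner_2d_block_cyclic(i, j, brow, bcol, prow, pcol):
      # Entry (i, j) lives in global block (i // brow, j // bcol);
      # blocks are dealt out cyclically over the processor grid.
      return ((i // brow) % prow, (j // bcol) % pcol)

For example, with brow = bcol = 2 on a 2 x 2 grid, owner_2d_block_cyclic(5, 0, 2, 2, 2, 2) returns (0, 0).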

CS267 Dense Linear Algebra I.34 Demmel Fa 2001 Distributed GE with a 2D Block Cyclic Layout
°The block size b in the algorithm and the block sizes brow and bcol in the layout satisfy b = brow = bcol
°Shaded regions indicate busy processors or communication performed
°It is unnecessary to have a barrier between each step of the algorithm; e.g. steps 9, 10, and 11 can be pipelined

CS267 Dense Linear Algebra I.35 Demmel Fa 2001 Distributed GE with a 2D Block Cyclic Layout

CS267 Dense Linear Algebra I.36 Demmel Fa 2001 Matrix multiply of green = green - blue * pink

CS267 Dense Linear Algebra I.37 Demmel Fa 2001
PDGESV = ScaLAPACK parallel LU routine
Since it can run no faster than its inner loop (PDGEMM), we measure:
 Efficiency = Speed(PDGESV) / Speed(PDGEMM)
Observations:
 Efficiency well above 50% for large enough problems
 For fixed N, as P increases, efficiency decreases (just as for PDGEMM)
 For fixed P, as N increases, efficiency increases (just as for PDGEMM)
 From the bottom table, the cost of solving Ax=b is about half that of matrix multiply for large enough matrices.
 From the flop counts we would expect it to be (2*n³)/(2/3*n³) = 3 times faster, but communication makes it a little slower.

CS267 Dense Linear Algebra I.38 Demmel Fa 2001

CS267 Dense Linear Algebra I.39 Demmel Fa 2001

CS267 Dense Linear Algebra I.40 Demmel Fa 2001 Scales well, nearly full machine speed

CS267 Dense Linear Algebra I.41 Demmel Fa 2001
 - Old version, pre-1998 Gordon Bell Prize
 - Still have ideas to accelerate (Project Available!)
 - Old Algorithm, plan to abandon

CS267 Dense Linear Algebra I.42 Demmel Fa 2001 The “Holy Grail” of Eigensolvers for Symmetric matrices To be propagated throughout LAPACK and ScaLAPACK

CS267 Dense Linear Algebra I.43 Demmel Fa 2001
 - Have good ideas to speed up (Project available!)
 - Hardest of all to parallelize

CS267 Dense Linear Algebra I.44 Demmel Fa 2001
°Out-of-core means the matrix lives on disk; it is too big for main memory
°Much harder to hide the latency of disk
°QR is much easier than LU because no pivoting is needed for QR

CS267 Dense Linear Algebra I.45 Demmel Fa 2001 A small software project...

CS267 Dense Linear Algebra I.46 Demmel Fa 2001 Extra Slides

CS267 Dense Linear Algebra I.47 Demmel Fa 2001 Parallelizing Gaussian Elimination
°Recall the parallelization steps from Lecture 3
 Decomposition: identify enough parallel work, but not too much
 Assignment: load balance work among threads
 Orchestration: communication and synchronization
 Mapping: which processors execute which threads
°Decomposition
 In the BLAS 2 algorithm nearly every flop in the inner loop can be done in parallel, so with n² processors we need 3n parallel steps
 This is too fine-grained; prefer calls to local matmuls instead

 for i = 1 to n-1
   A(i+1:n,i) = A(i+1:n,i) / A(i,i)                 … BLAS 1 (scale a vector)
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n)
                    - A(i+1:n, i) * A(i, i+1:n)     … BLAS 2 (rank-1 update)

CS267 Dense Linear Algebra I.48 Demmel Fa 2001 Assignment of parallel work in GE
°Think of assigning submatrices to threads, where each thread is responsible for updating the submatrix it owns
 The “owner computes” rule is natural because of locality
°What should the submatrices look like to achieve load balance?

CS267 Dense Linear Algebra I.49 Demmel Fa 2001 Different Data Layouts for Parallel GE (on 4 procs)
[Figure: candidate data layouts, with captions:]
 - Bad load balance: P0 idle after first n/4 steps
 - Load balanced, but can’t easily use BLAS2 or BLAS3
 - Can trade load balance and BLAS2/3 performance by choosing b, but factorization of the block column is a bottleneck
 - Complicated addressing
 - The winner!

CS267 Dense Linear Algebra I.50 Demmel Fa 2001 Computational Electromagnetics (MOM)
The main steps in the solution process are:
 Fill:        computing the matrix elements of A
 Factor:      factoring the dense matrix A
 Solve:       solving for one or more excitations b
 Field Calc:  computing the fields scattered from the object

CS267 Dense Linear Algebra I.51 Demmel Fa 2001 Analysis of MOM for Parallel Implementation

 Task         Work     Parallelism        Parallel Speed
 Fill         O(n²)    embarrassing       low
 Factor       O(n³)    moderately diff.   very high
 Solve        O(n²)    moderately diff.   high
 Field Calc.  O(n)     embarrassing       high

CS267 Dense Linear Algebra I.52 Demmel Fa 2001 BLAS 3 (Blocked) GEPP, using Delayed Updates

 for ib = 1 to n-1 step b           … Process matrix b columns at a time
   end = ib + b-1                   … Point to end of block of b columns
   apply BLAS2 version of GEPP to get A(ib:n, ib:end) = P’ * L’ * U’
   … let LL denote the strict lower triangular part of A(ib:end, ib:end) + I
   A(ib:end, end+1:n) = LL⁻¹ * A(ib:end, end+1:n)     … update next b rows of U
   A(end+1:n, end+1:n) = A(end+1:n, end+1:n)
                         - A(end+1:n, ib:end) * A(ib:end, end+1:n)     … BLAS 3
     … apply delayed updates with a single matrix multiply, with inner dimension b

CS267 Dense Linear Algebra I.53 Demmel Fa 2001 BLAS2 version of Gaussian Elimination with Partial Pivoting (GEPP)

 for i = 1 to n-1
   find and record k where |A(k,i)| = max{i <= j <= n} |A(j,i)|
     … i.e. largest entry in the rest of column i
   if |A(k,i)| = 0
     exit with a warning that A is singular, or nearly so
   elseif k != i
     swap rows i and k of A
   end if
   A(i+1:n,i) = A(i+1:n,i) / A(i,i)        … each quotient lies in [-1,1] … BLAS 1
   A(i+1:n,i+1:n) = A(i+1:n, i+1:n) - A(i+1:n, i) * A(i, i+1:n)
     … BLAS 2, most work in this line

CS267 Dense Linear Algebra I.54 Demmel Fa 2001 How to proceed:
°Consider basic parallel matrix multiplication algorithms on simple layouts
 Performance modeling to choose the best one
 -Time (message) = latency + #words * time-per-word
 -               = α + n*β
°Briefly discuss block-cyclic layout
°PBLAS = Parallel BLAS
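A tiny helper makes the α-β model concrete (a sketch; the α and β values below are made-up placeholders, not measurements):

  def message_time(nwords, alpha, beta):
      # Alpha-beta model: latency plus per-word transfer time.
      return alpha + nwords * beta

  # Example with hypothetical parameters: alpha = 10 microseconds, beta = 0.01 microseconds/word.
  t = message_time(nwords=1_000_000, alpha=10e-6, beta=0.01e-6)   # = 0.01001 seconds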

CS267 Dense Linear Algebra I.55 Demmel Fa 2001 Parallel Matrix Multiply
°Computing C = C + A*B
°Using basic algorithm: 2*n³ flops
°Variables are:
 Data layout
 Topology of machine
 Scheduling communication
°Use of performance models for algorithm design

CS267 Dense Linear Algebra I.56 Demmel Fa 2001 1D Layout
°Assume matrices are n x n and n is divisible by p
°A(i) refers to the n by n/p block column that processor i owns (similarly for B(i) and C(i))
°B(i,j) is the n/p by n/p subblock of B(i) in rows j*n/p through (j+1)*n/p
°Algorithm uses the formula
 C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)
[Figure: block columns owned by processors p0 p1 p2 p3 p4 p5 p6 p7]

CS267 Dense Linear Algebra I.57 Demmel Fa 2001 Matrix Multiply: 1D Layout on Bus or Ring
°Algorithm uses the formula
 C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i)
°First consider a bus-connected machine without broadcast: only one pair of processors can communicate at a time (ethernet)
°Second consider a machine with processors on a ring: all processors may communicate with nearest neighbors simultaneously

CS267 Dense Linear Algebra I.58 Demmel Fa 2001 Naïve MatMul for 1D layout on Bus without Broadcast
Naïve algorithm:

 C(myproc) = C(myproc) + A(myproc)*B(myproc,myproc)
 for i = 0 to p-1
   for j = 0 to p-1 except i
     if (myproc == i) send A(i) to processor j
     if (myproc == j)
       receive A(i) from processor i
       C(myproc) = C(myproc) + A(i)*B(i,myproc)
     barrier

Cost of inner loop:
 computation:   2*n*(n/p)² = 2*n³/p²
 communication: α + β*n²/p

CS267 Dense Linear Algebra I.59 Demmel Fa 2001 Naïve MatMul (continued)
Cost of inner loop:
 computation:   2*n*(n/p)² = 2*n³/p²
 communication: α + β*n²/p        … approximately
Only one pair of processors (i and j) is active on any iteration, and of those, only i is doing computation
 => the algorithm is almost entirely serial
Running time:
 (p*(p-1) + 1)*computation + p*(p-1)*communication
 ≈ 2*n³ + p²*α + p*n²*β
This is worse than the serial time, and it grows with p.

CS267 Dense Linear Algebra I.60 Demmel Fa 2001 Better Matmul for 1D layout on a Processor Ring
° Proc i can communicate with Proc(i-1) and Proc(i+1) simultaneously for all i

 Copy A(myproc) into Tmp
 C(myproc) = C(myproc) + Tmp*B(myproc, myproc)
 for j = 1 to p-1
   Send Tmp to processor myproc+1 mod p
   Receive Tmp from processor myproc-1 mod p
   C(myproc) = C(myproc) + Tmp*B(myproc-j mod p, myproc)

° Same idea as for gravity in the simple sharks and fish algorithm
° Time of inner loop = 2*(α + β*n²/p) + 2*n*(n/p)²
° Total Time = 2*n*(n/p)² + (p-1) * Time of inner loop ≈ 2*n³/p + 2*p*α + 2*β*n²
° Optimal for 1D layout on Ring or Bus, even with Broadcast:
 Perfect speedup for arithmetic
 A(myproc) must move to each other processor; this costs at least (p-1) * cost of sending n*(n/p) words
° Parallel Efficiency = 2*n³ / (p * Total Time)
                      = 1/(1 + α * p²/(2*n³) + β * p/(2*n))
                      = 1/(1 + O(p/n))
 Grows to 1 as n/p increases (or as α and β shrink)
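A sequential NumPy simulation of this ring algorithm, as a sketch (the p "processors" are simulated in a loop, and the Send/Receive pair becomes a circular shift of the Tmp blocks):

  import numpy as np

  def matmul_1d_ring(A, B, p):
      # Each simulated processor owns block column i of A, B and C (n divisible by p).
      n = A.shape[0]
      w = n // p
      C = np.zeros((n, n))
      tmp = [A[:, i*w:(i+1)*w].copy() for i in range(p)]        # Tmp starts as A(myproc)
      for step in range(p):
          for me in range(p):
              j = (me - step) % p                               # Tmp on 'me' currently holds A(j)
              C[:, me*w:(me+1)*w] += tmp[me] @ B[j*w:(j+1)*w, me*w:(me+1)*w]
          tmp = [tmp[(me - 1) % p] for me in range(p)]          # pass Tmp to the next processor
      return C

For any n divisible by p, np.allclose(matmul_1d_ring(A, B, p), A @ B) should hold.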

CS267 Dense Linear Algebra I.61 Demmel Fa 2001 MatMul with 2D Layout
°Consider processors in a 2D grid (physical or logical)
°Processors can communicate with 4 nearest neighbors
 Broadcast along rows and columns

 p(0,0)  p(0,1)  p(0,2)
 p(1,0)  p(1,1)  p(1,2)
 p(2,0)  p(2,1)  p(2,2)

CS267 Dense Linear Algebra I.62 Demmel Fa 2001 Cannon’s Algorithm

 … C(i,j) = C(i,j) + Σ_k A(i,k)*B(k,j)
 … assume s = sqrt(p) is an integer
 forall i = 0 to s-1                        … “skew” A
   left-circular-shift row i of A by i
   … so that A(i,j) is overwritten by A(i, (j+i) mod s)
 forall i = 0 to s-1                        … “skew” B
   up-circular-shift column i of B by i
   … so that B(i,j) is overwritten by B((i+j) mod s, j)
 for k = 0 to s-1                           … sequential
   forall i = 0 to s-1 and j = 0 to s-1     … all processors in parallel
     C(i,j) = C(i,j) + A(i,j)*B(i,j)
   left-circular-shift each row of A by 1
   up-circular-shift each column of B by 1
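A sequential NumPy simulation of Cannon's algorithm, as a sketch (blocks held by "processor" (i,j) are kept in nested lists, and the circular shifts become list rotations):

  import numpy as np

  def cannon(A, B, s):
      # n must be divisible by s; (i, j) indexes the s-by-s processor grid.
      n = A.shape[0]
      w = n // s
      blk = lambda M, i, j: M[i*w:(i+1)*w, j*w:(j+1)*w].copy()
      a = [[blk(A, i, (j + i) % s) for j in range(s)] for i in range(s)]   # skew A
      b = [[blk(B, (i + j) % s, j) for j in range(s)] for i in range(s)]   # skew B
      c = [[np.zeros((w, w)) for _ in range(s)] for _ in range(s)]
      for _ in range(s):
          for i in range(s):
              for j in range(s):
                  c[i][j] += a[i][j] @ b[i][j]
          a = [[a[i][(j + 1) % s] for j in range(s)] for i in range(s)]    # shift rows of A left
          b = [[b[(i + 1) % s][j] for j in range(s)] for i in range(s)]    # shift columns of B up
      return np.block(c)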

CS267 Dense Linear Algebra I.63 Demmel Fa 2001 Communication in Cannon C(1,2) = A(1,0) * B(0,2) + A(1,1) * B(1,2) + A(1,2) * B(2,2)

CS267 Dense Linear Algebra I.64 Demmel Fa 2001 Cost of Cannon’s Algorithm

 forall i = 0 to s-1                         … recall s = sqrt(p)
   left-circular-shift row i of A by i       … cost = s*(α + β*n²/p)
 forall i = 0 to s-1
   up-circular-shift column i of B by i      … cost = s*(α + β*n²/p)
 for k = 0 to s-1
   forall i = 0 to s-1 and j = 0 to s-1
     C(i,j) = C(i,j) + A(i,j)*B(i,j)         … cost = 2*(n/s)³ = 2*n³/p^(3/2)
   left-circular-shift each row of A by 1    … cost = α + β*n²/p
   up-circular-shift each column of B by 1   … cost = α + β*n²/p

° Total Time = 2*n³/p + 4*s*α + 4*β*n²/s
° Parallel Efficiency = 2*n³ / (p * Total Time)
                      = 1/(1 + α * 2*(s/n)³ + β * 2*(s/n))
                      = 1/(1 + O(sqrt(p)/n))
° Grows to 1 as n/s = n/sqrt(p) = sqrt(data per processor) grows
° Better than the 1D layout, which had Efficiency = 1/(1 + O(p/n))

CS267 Dense Linear Algebra I.65 Demmel Fa 2001 Drawbacks to Cannon
°Hard to generalize for
 p not a perfect square
 A and B not square
 Dimensions of A, B not perfectly divisible by s = sqrt(p)
 A and B not “aligned” in the way they are stored on processors
 block-cyclic layouts
°Memory hog (extra copies of local matrices)

CS267 Dense Linear Algebra I.66 Demmel Fa 2001 SUMMA = Scalable Universal Matrix Multiply Algorithm
°Slightly less efficient, but simpler and easier to generalize
°Presentation from van de Geijn and Watts
 Similar ideas appeared many times
°Used in practice in PBLAS = Parallel BLAS

CS267 Dense Linear Algebra I.67 Demmel Fa 2001 SUMMA

[Figure: C(I,J) = A(I,k) * B(k,J): block C(I,J) is updated from the row panel A(I,k) and the column panel B(k,J)]

° I, J represent all rows, columns owned by a processor
° k is a single row or column (or a block of b rows or columns)
° C(I,J) = C(I,J) + Σ_k A(I,k)*B(k,J)
° Assume a p_r by p_c processor grid (p_r = p_c = 4 above)

 For k = 0 to n-1        … or n/b-1, where b is the block size = # cols in A(I,k) and # rows in B(k,J)
   for all I = 1 to p_r    … in parallel
     owner of A(I,k) broadcasts it to the whole processor row
   for all J = 1 to p_c    … in parallel
     owner of B(k,J) broadcasts it to the whole processor column
   Receive A(I,k) into Acol
   Receive B(k,J) into Brow
   C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
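A sequential NumPy sketch of SUMMA's computation pattern (the broadcasts are implicit here: every simulated processor simply reads the current Acol and Brow panels):

  import numpy as np

  def summa(A, B, s, bsize=1):
      # s-by-s processor grid; n divisible by s; bsize = panel width b.
      n = A.shape[0]
      w = n // s
      C = np.zeros((n, n))
      for k in range(0, n, bsize):
          Acol = A[:, k:k+bsize]                 # broadcast along processor rows
          Brow = B[k:k+bsize, :]                 # broadcast along processor columns
          for I in range(s):                     # each processor updates its own C block
              for J in range(s):
                  C[I*w:(I+1)*w, J*w:(J+1)*w] += Acol[I*w:(I+1)*w, :] @ Brow[:, J*w:(J+1)*w]
      return C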

CS267 Dense Linear Algebra I.68 Demmel Fa 2001 SUMMA performance

 For k = 0 to n/b-1
   for all I = 1 to s                      … s = sqrt(p)
     owner of A(I,k) broadcasts it to the whole processor row
       … time = log s * (α + β*b*n/s), using a tree
   for all J = 1 to s
     owner of B(k,J) broadcasts it to the whole processor column
       … time = log s * (α + β*b*n/s), using a tree
   Receive A(I,k) into Acol
   Receive B(k,J) into Brow
   C(myproc, myproc) = C(myproc, myproc) + Acol * Brow
     … time = 2*(n/s)²*b

° Total time = 2*n³/p + α * log p * n/b + β * log p * n²/s
° Parallel Efficiency = 1/(1 + α * log p * p/(2*b*n²) + β * log p * s/(2*n))
° ~Same β term as Cannon, except for the log p factor
 log p grows slowly, so this is ok
° Latency (α) term can be larger, depending on b
 When b=1, we get α * log p * n
 As b grows to n/s, the term shrinks to α * log p * s (log p times Cannon)
° Temporary storage grows like 2*b*n/s
° Can change b to trade off latency cost against memory

CS267 Dense Linear Algebra I.69 Demmel Fa 2001 Summary of Parallel Matrix Multiply Algorithms
°1D Layout
 Bus without broadcast: slower than serial
 Nearest-neighbor communication on a ring (or bus with broadcast): Efficiency = 1/(1 + O(p/n))
°2D Layout
 Cannon
 -Efficiency = 1/(1 + O(p^(1/2)/n))
 -Hard to generalize for general p, n, block cyclic layouts, alignment
 SUMMA
 -Efficiency = 1/(1 + O(log p * p/(b*n²) + log p * p^(1/2)/n))
 -Very general
 -b small => less memory, lower efficiency
 -b large => more memory, higher efficiency
 Gustavson et al
 -Efficiency = 1/(1 + O(p^(1/3)/n)) ??