Shared Memory Hardware: Case Study in Matrix Multiplication
Kathy Yelick, CS194, 9/12/2007

Slide 2: Basic Shared Memory Architecture
Processors are all connected to a large shared memory. Where are the caches?
Now take a closer look at structure, costs, limits, and programming.
[Diagram: processors P1 ... Pn connected through an interconnect to memory]

Slide 3: Intuitive Memory Model
Reading an address should return the last value written to that address.
This is easy in uniprocessors, except for I/O.
The cache coherence problem in multiprocessors is more pervasive and more performance-critical.
More formally, this is called sequential consistency: "A multiprocessor is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program." [Lamport, 1979]

Slide 4: Sequential Consistency Intuition
Sequential consistency says the machine behaves as if it does the following:
[Diagram: P0, P1, P2, P3 each accessing a single shared memory, one operation at a time]

Slide 5: Memory Consistency Semantics
What does this imply about program behavior?
- No process ever sees "garbage" values, i.e., the average of two values.
- Processors always see values written by some processor.
- The value seen is constrained by program order on all processors.
- Time always moves forward.
Example (spin lock): P1 writes data=1, then writes flag=1; P2 waits until flag=1, then reads data.
  initially: flag=0, data=0
  P1:  data = 1              P2:  10: if flag=0, goto 10
       flag = 1                   ... = data
If P2 sees the new value of flag (=1), it must also see the new value of data (=1): if P2 reads flag, then P2 may read data.
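As an illustration (not from the slides), here is the flag/data example written with C11 sequentially consistent atomics; with plain non-atomic variables the guarantee would not hold on machines with weaker memory models, which is exactly the point of the consistency discussion.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

atomic_int data = 0;
atomic_int flag = 0;

void *producer(void *arg) {            /* plays the role of P1 */
    (void)arg;
    atomic_store(&data, 1);            /* data = 1 */
    atomic_store(&flag, 1);            /* flag = 1 */
    return NULL;
}

void *consumer(void *arg) {            /* plays the role of P2 */
    (void)arg;
    while (atomic_load(&flag) == 0)    /* 10: if flag=0, goto 10 */
        ;                              /* spin */
    printf("data = %d\n", atomic_load(&data));  /* must print 1 */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, producer, NULL);
    pthread_create(&t2, NULL, consumer, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```

Sequentially consistent atomics are the default for atomic_load/atomic_store, which matches the Lamport definition quoted on slide 3.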

Slide 6: If Caches Are Not "Coherent"
Coherence means different copies of the same location have the same value; otherwise the caches are incoherent:
- p1 and p2 both have cached copies of data (= 0).
- p1 writes data=1, which may "write through" to memory.
- p2 reads data, but gets the "stale" cached copy.
- This may happen even if p2 read an updated value of another variable, flag, that came from memory.
[Diagram: p1's cache holds data=1 after the write, while p2's cache still holds the stale data=0]

Slide 7: Snoopy Cache-Coherence Protocols
- The memory bus is a broadcast medium.
- Caches contain information on which addresses they store.
- The cache controller "snoops" all transactions on the bus. A transaction is relevant if it involves a cache block currently contained in this cache.
- On a relevant transaction, take action to ensure coherence: invalidate, update, or supply the value.
- Many possible designs (see CS252 or CS258).
[Diagram: processors P0 ... Pn with caches (state, address, data) and memory attached to a shared memory bus; each cache snoops the bus transactions]

Slide 8: Limits of Bus-Based Shared Memory
[Diagram: processors with caches, memory, and I/O sharing one bus]
Assume a 1 GHz processor without cache:
  => 4 GB/s instruction bandwidth per processor (32-bit instructions)
  => 1.2 GB/s data bandwidth at a 30% load-store mix
Suppose a 98% instruction hit rate and a 95% data hit rate:
  => 80 MB/s instruction bandwidth per processor
  => 60 MB/s data bandwidth per processor
  => 140 MB/s combined bus traffic per processor
(Without caches, each processor would demand 5.2 GB/s; with caches, 140 MB/s.)
Assuming 1 GB/s bus bandwidth, about 8 processors will saturate the bus.
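The slide's arithmetic can be checked with a few lines of C; the instruction mix and hit rates are the slide's assumptions, not measurements.

```c
#include <stdio.h>

int main(void) {
    double inst_bw   = 4.0e9;        /* 1 GHz, 32-bit instruction fetch */
    double data_bw   = 1.2e9;        /* 30% load/store mix              */
    double inst_miss = 1.0 - 0.98;   /* 98% instruction hit rate        */
    double data_miss = 1.0 - 0.95;   /* 95% data hit rate               */

    double per_proc = inst_bw * inst_miss + data_bw * data_miss;  /* bytes/s on the bus */
    double bus_bw   = 1.0e9;

    printf("per-processor bus traffic: %.0f MB/s\n", per_proc / 1e6);  /* ~140 */
    printf("processors to saturate bus: %.1f\n", bus_bw / per_proc);   /* ~7.1, i.e. ~8 */
    return 0;
}
```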

Slide 9: Basic Choices in Memory/Cache Coherence
Keep a directory to track which memory stores the latest copy of the data. The directory, like a cache, may keep information such as:
- Valid/invalid
- Dirty (inconsistent with memory)
- Shared (in other caches)
When a processor executes a write to shared data, the basic design choices are:
With respect to memory:
- Write-through cache: do the write in memory as well as in the cache.
- Write-back cache: wait and do the write later, when the item is flushed.
With respect to other cached copies:
- Update: give all other processors the new value.
- Invalidate: all other processors remove the block from their caches.
See CS252 or CS258 for details.

Slide 10: Review of the BLAS

  BLAS level | Example             | # mem refs | # flops | q
  1          | "axpy", dot product | 3n         | 2n      | 2/3
  2          | matrix-vector mult  | n^2        | 2n^2    | 2
  3          | matrix-matrix mult  | 4n^2       | 2n^3    | n/2

- Building blocks for all of linear algebra.
- Parallel versions call serial versions on each processor, so they must be fast!
- Recall q = # flops / # mem refs. The larger q is, the faster the algorithm can go in the presence of a memory hierarchy.
- "axpy": y = α*x + y, where α is a scalar and x and y are vectors.
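For concreteness, a plain C axpy (a sketch, not from the slides) makes the level-1 ratio visible: 2n flops against roughly 3n memory references, hence q = 2/3.

```c
#include <stddef.h>

/* y = alpha*x + y: per element 1 multiply + 1 add (2 flops),
 * and 2 loads + 1 store (3 memory references). */
void axpy(size_t n, double alpha, const double *x, double *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = alpha * x[i] + y[i];
}
```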

Slide 11: Different Parallel Data Layouts for Matrices
Why? We want parallelism within submatrices.
1) 1D Column Blocked Layout
2) 1D Column Cyclic Layout
3) 1D Column Block Cyclic Layout (block size b)
4) Row versions of the previous layouts
5) 2D Row and Column Blocked Layout
6) 2D Row and Column Block Cyclic Layout (generalizes the others)
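To make the 1D column layouts concrete, here is a small, hypothetical ownership map (not from the slides): given a column index j, it returns the owning processor under the blocked, cyclic, and block-cyclic layouts, assuming p divides n.

```c
#include <stdio.h>

int owner_blocked(int j, int n, int p)      { return j / (n / p); }   /* contiguous chunks */
int owner_cyclic(int j, int p)              { return j % p; }         /* round robin       */
int owner_block_cyclic(int j, int b, int p) { return (j / b) % p; }   /* chunks of b, round robin */

int main(void) {
    int n = 16, p = 4, b = 2;
    for (int j = 0; j < n; j++)
        printf("col %2d: blocked->%d cyclic->%d block-cyclic->%d\n",
               j, owner_blocked(j, n, p), owner_cyclic(j, p),
               owner_block_cyclic(j, b, p));
    return 0;
}
```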

Slide 12: Parallel Matrix-Vector Product
- Compute y = y + A*x, where A is a dense matrix.
- Layout: 1D row blocked.
- A(i) refers to the n/p-by-n block row that processor i owns; x(i) and y(i) similarly refer to the segments of x and y owned by i.
Algorithm:
  For each processor i:
    Broadcast x(i)
    Compute y(i) = A(i)*x
The algorithm uses the formula y(i) = y(i) + A(i)*x = y(i) + Σ_j A(i,j)*x(j).
[Diagram: x and y split across P0 ... P3, matching the block rows of A]
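A minimal MPI sketch of this algorithm, assuming p divides n; `matvec_1d_row` is a hypothetical helper, not a library routine. The per-processor broadcasts of x(i) are realized as a single `MPI_Allgather`, after which every rank multiplies its block row by the full vector.

```c
#include <mpi.h>
#include <stdlib.h>

/* y(i) = y(i) + A(i)*x for the 1D row-blocked layout.
 * Ai: local n/p x n block row (row-major); xi, yi: local n/p entries. */
void matvec_1d_row(const double *Ai, const double *xi, double *yi,
                   int n, MPI_Comm comm)
{
    int p; MPI_Comm_size(comm, &p);
    int nloc = n / p;                         /* assume p divides n */
    double *x = malloc(n * sizeof *x);

    /* "Broadcast x(i)": gather all segments so every rank holds the full x. */
    MPI_Allgather(xi, nloc, MPI_DOUBLE, x, nloc, MPI_DOUBLE, comm);

    /* y(i) += A(i)*x */
    for (int r = 0; r < nloc; r++)
        for (int c = 0; c < n; c++)
            yi[r] += Ai[r * n + c] * x[c];

    free(x);
}
```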

Slide 13: Matrix-Vector Product y = y + A*x (other layouts)
- A column layout of the matrix eliminates the broadcast of x, but adds a reduction to update the destination y.
- A 2D blocked layout uses a broadcast (of x(j) within a processor column) and a reduction (of A(i,j)*x(j) within a processor row); the grid is sqrt(p) by sqrt(p) for a square processor grid.
[Diagram: 4x4 processor grid P0 ... P15, with x held along one processor column]

Slide 14: Matrix-Vector Product y = y + A*x (continued)
Same layouts as the previous slide, with one addition: it may be useful to have x(i) and y(i) on the same processor.
[Diagram: 4x4 processor grid with x and y stored on the diagonal processors P0, P5, P10, P15]

Slide 15: Parallel Matrix Multiply
- Computing C = C + A*B.
- Using the basic algorithm: 2*n^3 flops.
- Variables are: data layout, topology of the machine, scheduling of communication.
- Use performance models for algorithm design:
    Message Time = "latency" + #words * time-per-word = α + n*β
  where α and β are measured in flop times (i.e., the time per flop is 1).
- Efficiency (in any model): serial time / (p * parallel time); perfect (linear) speedup means efficiency = 1.
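A tiny sketch of this model in C, with hypothetical machine numbers (the values of alpha and beta below are illustrative, not from the slides): message time is α + n*β, both in units of flop time, and efficiency is serial time over p times parallel time.

```c
#include <stdio.h>

double message_time(double alpha, double beta, double nwords) {
    return alpha + nwords * beta;          /* latency + words * time-per-word */
}

double efficiency(double serial_time, int p, double parallel_time) {
    return serial_time / (p * parallel_time);
}

int main(void) {
    double alpha = 1000.0, beta = 10.0;    /* hypothetical, in flop-times */
    printf("time to send 1e6 words: %g flop-times\n",
           message_time(alpha, beta, 1e6));
    printf("example efficiency: %g\n", efficiency(2e9, 16, 1.5e8));
    return 0;
}
```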

Slide 16: Matrix Multiply with 1D Column Layout
- Assume the matrices are n x n and n is divisible by p. (This may be a reasonable assumption for analysis, but not for code.)
- A(i) refers to the n by n/p block column that processor i owns (similarly for B(i) and C(i)).
- B(i,j) is the n/p by n/p sub-block of B(i) in rows j*n/p through (j+1)*n/p - 1.
- The algorithm uses the formula C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i).
[Diagram: block columns assigned to processors p0 ... p7]

Slide 17: Matrix Multiply: 1D Layout on Bus or Ring
- The algorithm uses the formula C(i) = C(i) + A*B(i) = C(i) + Σ_j A(j)*B(j,i).
- First, consider a bus-connected machine without broadcast: only one pair of processors can communicate at a time (as on Ethernet).
- Second, consider a machine with processors on a ring: all processors may communicate with their nearest neighbors simultaneously.

Slide 18: MatMul: 1D Layout on Bus without Broadcast
Naive algorithm:
    C(myproc) = C(myproc) + A(myproc)*B(myproc,myproc)
    for i = 0 to p-1                  ... for each block column A(i) of A
      for j = 0 to p-1 except i       ... for every processor not having A(i)
        if (myproc == i) send A(i) to processor j
        if (myproc == j) receive A(i) from processor i
        C(myproc) = C(myproc) + A(i)*B(i,myproc)
      barrier
Cost of inner loop:
  computation: 2*n*(n/p)^2 = 2*n^3/p^2
  communication: α + β*n^2/p

Slide 19: Naive MatMul (continued)
Cost of inner loop:
  computation: 2*n*(n/p)^2 = 2*n^3/p^2
  communication: α + β*n^2/p ... approximately
Only one pair of processors (i and j) is active on any iteration, and of those, only i is doing computation, so the algorithm is almost entirely serial.
Running time = (p*(p-1) + 1)*computation + p*(p-1)*communication
            ~= 2*n^3 + p^2*α + p*n^2*β
This is worse than the serial time and grows with p. Why might you still want to do this?

Slide 20: Matmul for 1D Layout on a Processor Ring
Pairs of processors can communicate simultaneously:
    Copy A(myproc) into Tmp
    C(myproc) = C(myproc) + Tmp*B(myproc, myproc)
    for j = 1 to p-1
      Send Tmp to processor (myproc+1) mod p
      Receive Tmp from processor (myproc-1) mod p
      C(myproc) = C(myproc) + Tmp*B((myproc-j) mod p, myproc)
- Same idea as for gravity in the simple sharks-and-fish algorithm.
- May want double buffering in practice for overlap.
- Deadlock details are ignored in this code.
Time of inner loop = 2*(α + β*n^2/p) + 2*n*(n/p)^2
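The ring pseudocode maps directly onto MPI. Below is a minimal sketch, assumed rather than taken from the slides: `matmul_1d_ring` is a hypothetical helper, `A_loc`, `B_loc`, and `C_loc` are this rank's n x n/p block columns stored row-major, and `MPI_Sendrecv_replace` plays the role of the send/receive pair (sidestepping the deadlock details the slide ignores). A tuned version would use double buffering and a BLAS call for the local multiply.

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

void matmul_1d_ring(const double *A_loc, const double *B_loc, double *C_loc,
                    int n, MPI_Comm comm)
{
    int p, me;
    MPI_Comm_size(comm, &p);
    MPI_Comm_rank(comm, &me);
    int nb = n / p;                                    /* assume p divides n */

    double *tmp = malloc((size_t)n * nb * sizeof *tmp);
    memcpy(tmp, A_loc, (size_t)n * nb * sizeof *tmp);  /* Tmp = A(myproc) */

    for (int j = 0; j < p; j++) {
        if (j > 0)                                     /* shift Tmp around the ring */
            MPI_Sendrecv_replace(tmp, n * nb, MPI_DOUBLE,
                                 (me + 1) % p, 0,      /* send to myproc+1 */
                                 (me - 1 + p) % p, 0,  /* recv from myproc-1 */
                                 comm, MPI_STATUS_IGNORE);

        int owner = (me - j + p) % p;                  /* Tmp now holds A(owner) */
        /* C(myproc) += Tmp * B(owner, myproc); B(owner,myproc) is the
         * nb x nb block of B_loc starting at row owner*nb. */
        for (int r = 0; r < n; r++)
            for (int c = 0; c < nb; c++)
                for (int k = 0; k < nb; k++)
                    C_loc[r * nb + c] +=
                        tmp[r * nb + k] * B_loc[(owner * nb + k) * nb + c];
    }
    free(tmp);
}
```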

Slide 21: Matmul for 1D Layout on a Processor Ring (analysis)
Time of inner loop = 2*(α + β*n^2/p) + 2*n*(n/p)^2
Total Time = 2*n*(n/p)^2 + (p-1) * (time of inner loop)
           ~ 2*n^3/p + 2*p*α + 2*β*n^2
This is optimal for a 1D layout on a ring or bus, even with broadcast:
- Perfect speedup for arithmetic.
- A(myproc) must move to each other processor, which costs at least (p-1) * (cost of sending n*(n/p) words).
Parallel Efficiency = 2*n^3 / (p * Total Time)
                    = 1/(1 + α*p^2/(2*n^3) + β*p/(2*n))
                    = 1/(1 + O(p/n))
Efficiency grows to 1 as n/p increases (or as α and β shrink).

Slide 22: MatMul with 2D Layout
- Consider processors in a 2D grid (physical or logical).
- Processors can communicate with their 4 nearest neighbors.
- Broadcast along rows and columns.
- Assume p processors form a square s x s grid.
[Diagram: C = A * B with each matrix blocked over the 3x3 processor grid p(0,0) ... p(2,2)]

Slide 23: Cannon's Algorithm
... C(i,j) = C(i,j) + Σ_k A(i,k)*B(k,j)
... assume s = sqrt(p) is an integer
    forall i = 0 to s-1                       ... "skew" A
      left-circular-shift row i of A by i
        ... so that A(i,j) is overwritten by A(i, (j+i) mod s)
    forall i = 0 to s-1                       ... "skew" B
      up-circular-shift column i of B by i
        ... so that B(i,j) is overwritten by B((i+j) mod s, j)
    for k = 0 to s-1                          ... sequential
      forall i = 0 to s-1 and j = 0 to s-1    ... all processors in parallel
        C(i,j) = C(i,j) + A(i,j)*B(i,j)
        left-circular-shift each row of A by 1
        up-circular-shift each column of B by 1
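A minimal MPI sketch of Cannon's algorithm under the stated assumptions (p a perfect square, square nb x nb local blocks); `cannon` and `local_mm` are hypothetical names, not library routines. The periodic Cartesian communicator supplies the row and column rings, and `MPI_Sendrecv_replace` performs each circular shift.

```c
#include <mpi.h>
#include <math.h>

static void local_mm(const double *A, const double *B, double *C, int nb) {
    for (int i = 0; i < nb; i++)
        for (int k = 0; k < nb; k++)
            for (int j = 0; j < nb; j++)
                C[i * nb + j] += A[i * nb + k] * B[k * nb + j];
}

void cannon(double *A_loc, double *B_loc, double *C_loc, int nb, MPI_Comm comm)
{
    int p; MPI_Comm_size(comm, &p);
    int s = (int)(sqrt((double)p) + 0.5);       /* assume p is a perfect square */
    int dims[2] = {s, s}, periods[2] = {1, 1}, coords[2], me, src, dst;
    MPI_Comm grid;
    MPI_Cart_create(comm, 2, dims, periods, 1, &grid);
    MPI_Comm_rank(grid, &me);
    MPI_Cart_coords(grid, me, 2, coords);       /* coords[0]=i (row), coords[1]=j (col) */

    /* Initial skew: row i of A shifted left by i, column j of B shifted up by j. */
    if (coords[0] != 0) {
        MPI_Cart_shift(grid, 1, -coords[0], &src, &dst);
        MPI_Sendrecv_replace(A_loc, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                             grid, MPI_STATUS_IGNORE);
    }
    if (coords[1] != 0) {
        MPI_Cart_shift(grid, 0, -coords[1], &src, &dst);
        MPI_Sendrecv_replace(B_loc, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                             grid, MPI_STATUS_IGNORE);
    }

    for (int k = 0; k < s; k++) {
        local_mm(A_loc, B_loc, C_loc, nb);      /* C(i,j) += A-block * B-block */
        MPI_Cart_shift(grid, 1, -1, &src, &dst);   /* shift A left by 1 */
        MPI_Sendrecv_replace(A_loc, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                             grid, MPI_STATUS_IGNORE);
        MPI_Cart_shift(grid, 0, -1, &src, &dst);   /* shift B up by 1 */
        MPI_Sendrecv_replace(B_loc, nb*nb, MPI_DOUBLE, dst, 0, src, 0,
                             grid, MPI_STATUS_IGNORE);
    }
    /* Note: A_loc and B_loc finish in skewed positions; reversing the
     * initial skew would restore them. */
    MPI_Comm_free(&grid);
}
```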

Slide 24: Cannon's Matrix Multiplication (one entry of the result)
C(1,2) = A(1,0) * B(0,2) + A(1,1) * B(1,2) + A(1,2) * B(2,2)

Slide 25: Initial Step to Skew Matrices in Cannon
[Diagram: the blocked 3x3 layouts of A and B before skewing, and the same blocks after skewing (row i of A shifted left by i, column j of B shifted up by j)]

Slide 26: Shifting Steps in Cannon
[Diagram: the block positions of A and B after the first, second, and third shift steps; each step moves every row of A left by 1 and every column of B up by 1]

Slide 27: Cost of Cannon's Algorithm
    forall i = 0 to s-1                          ... recall s = sqrt(p)
      left-circular-shift row i of A by i        ... cost = s*(α + β*n^2/p)
    forall i = 0 to s-1
      up-circular-shift column i of B by i       ... cost = s*(α + β*n^2/p)
    for k = 0 to s-1                             ... sequential loop
      forall i = 0 to s-1 and j = 0 to s-1
        C(i,j) = C(i,j) + A(i,j)*B(i,j)          ... cost = 2*(n/s)^3 = 2*n^3/p^(3/2)
        left-circular-shift each row of A by 1   ... cost = α + β*n^2/p
        up-circular-shift each column of B by 1  ... cost = α + β*n^2/p
Total Time = 2*n^3/p + 4*s*α + 4*β*n^2/s
Parallel Efficiency = 2*n^3 / (p * Total Time)
                    = 1/(1 + α*2*(s/n)^3 + β*2*(s/n))
                    = 1/(1 + O(sqrt(p)/n))
Efficiency grows to 1 as n/s = n/sqrt(p) = sqrt(data per processor) grows.
This is better than the 1D layout, which had Efficiency = 1/(1 + O(p/n)).

Slide 28: Pros and Cons of Cannon
Pro:
- Local computation is one call to an (optimized) matrix multiply.
Cons:
- Hard to generalize for:
  - p not a perfect square
  - A and B not square
  - dimensions of A and B not perfectly divisible by s = sqrt(p)
  - A and B not "aligned" in the way they are stored on processors
  - block-cyclic layouts
- Memory hog (extra copies of local matrices).

Slide 29: SUMMA Algorithm
- SUMMA = Scalable Universal Matrix Multiply.
- Slightly less efficient than Cannon, but simpler and easier to generalize.
- Presentation from van de Geijn and Watts.
- Similar ideas appeared many times.
- Used in practice in PBLAS = Parallel BLAS.

Slide 30: SUMMA
C(i,j) = C(i,j) + Σ_k A(i,k) * B(k,j)
- i, j represent all rows and columns owned by a processor.
- k is a single row or column, or a block of b rows or columns.
- Assume a p_r by p_c processor grid (p_r = p_c = 4 in the figure); the grid need not be square.
[Diagram: C(i,j) accumulates the product of the block column A(i,k) and the block row B(k,j)]

Slide 31: SUMMA (pseudocode)
    for k = 0 to n-1            ... or n/b-1, where b is the block size
                                ... b = # cols in A(i,k) and # rows in B(k,j)
      for all i = 1 to p_r      ... in parallel
        owner of A(i,k) broadcasts it to its whole processor row
      for all j = 1 to p_c      ... in parallel
        owner of B(k,j) broadcasts it to its whole processor column
      Receive A(i,k) into Acol
      Receive B(k,j) into Brow
      C_myproc = C_myproc + Acol * Brow
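A minimal MPI sketch of SUMMA under simplifying assumptions (square s x s grid, ranks laid out row-major, panel width b dividing the local block size nb); `summa` is a hypothetical helper, not the PBLAS routine. Row and column communicators built with `MPI_Comm_split` carry the two broadcasts per step.

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* A_loc, B_loc, C_loc: nb x nb local blocks (row-major), nb = n/s. */
void summa(const double *A_loc, const double *B_loc, double *C_loc,
           int nb, int b, int s, MPI_Comm grid)
{
    int me; MPI_Comm_rank(grid, &me);
    int myrow = me / s, mycol = me % s;

    MPI_Comm row_comm, col_comm;
    MPI_Comm_split(grid, myrow, mycol, &row_comm);  /* processors in my row    */
    MPI_Comm_split(grid, mycol, myrow, &col_comm);  /* processors in my column */

    double *Acol = malloc((size_t)nb * b * sizeof *Acol);  /* nb x b panel of A */
    double *Brow = malloc((size_t)b * nb * sizeof *Brow);  /* b x nb panel of B */

    int npanels = (nb * s) / b;                     /* n/b steps */
    for (int t = 0; t < npanels; t++) {
        int owner = (t * b) / nb;                   /* grid column (for A) / row (for B) owning panel t */
        int off   = (t * b) % nb;                   /* local offset of the panel */

        if (mycol == owner)                         /* pack my nb x b slice of A */
            for (int r = 0; r < nb; r++)
                memcpy(&Acol[r * b], &A_loc[r * nb + off], b * sizeof *Acol);
        MPI_Bcast(Acol, nb * b, MPI_DOUBLE, owner, row_comm);

        if (myrow == owner)                         /* pack my b x nb slice of B */
            memcpy(Brow, &B_loc[off * nb], (size_t)b * nb * sizeof *Brow);
        MPI_Bcast(Brow, b * nb, MPI_DOUBLE, owner, col_comm);

        /* Rank-b update: C_loc += Acol * Brow */
        for (int i = 0; i < nb; i++)
            for (int k = 0; k < b; k++)
                for (int j = 0; j < nb; j++)
                    C_loc[i * nb + j] += Acol[i * b + k] * Brow[k * nb + j];
    }
    free(Acol); free(Brow);
    MPI_Comm_free(&row_comm); MPI_Comm_free(&col_comm);
}
```

In a production version the rank-b update would be a call to DGEMM, which is exactly why SUMMA is the algorithm used inside PBLAS.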

Slide 32: SUMMA Performance
    for k = 0 to n/b-1
      for all i = 1 to s              ... s = sqrt(p)
        owner of A(i,k) broadcasts it to its whole processor row
          ... time = log s * (α + β*b*n/s), using a tree
      for all j = 1 to s
        owner of B(k,j) broadcasts it to its whole processor column
          ... time = log s * (α + β*b*n/s), using a tree
      Receive A(i,k) into Acol
      Receive B(k,j) into Brow
      C_myproc = C_myproc + Acol * Brow
          ... time = 2*(n/s)^2*b
Total time = 2*n^3/p + α * log p * n/b + β * log p * n^2/s
(To simplify the analysis only, assume s = sqrt(p).)

Slide 33: SUMMA Performance (continued)
Total time = 2*n^3/p + α * log p * n/b + β * log p * n^2/s
Parallel Efficiency = 1/(1 + α * log p * p/(2*b*n^2) + β * log p * s/(2*n))
- The β term is the same as Cannon's, except for the log p factor; log p grows slowly, so this is acceptable.
- The latency (α) term can be larger, depending on b:
  - When b = 1, it is α * log p * n.
  - As b grows to n/s, the term shrinks to α * log p * s (log p times Cannon's).
- Temporary storage grows like 2*b*n/s.
- b can be varied to trade off latency cost against memory.

Slide 34: ScaLAPACK Parallel Library

Slide 35: PDGEMM = PBLAS routine for matrix multiply
- DGEMM = BLAS routine for matrix multiply.
- Maximum speed for PDGEMM = (# procs) * (speed of DGEMM).
Observations (from the performance plots, not shown):
- Efficiency is always at least 48%.
- For fixed N, as P increases, Mflops increase, but at less than 100% efficiency (efficiency drops).
- For fixed P, as N increases, Mflops and efficiency rise.

Slide 36: Recursive Layouts
- For both cache hierarchies and parallelism, recursive layouts may be useful: Z-Morton, U-Morton, and X-Morton layouts, and also the Hilbert layout and others.
- What about the user's view?
  - Some problems can be solved on a permutation of the data, so the user's layout may not need to change.
  - Or: convert on input/output, invisibly to the user.
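As an illustration (not from the slides), the Z-Morton position of a matrix element can be computed by interleaving the bits of its row and column indices; `morton_index` below is a hypothetical helper showing the recursive Z pattern on a 4x4 matrix.

```c
#include <stdio.h>
#include <stdint.h>

/* Interleave the bits of row and column: column bits go to even positions,
 * row bits to odd positions, giving the element's Z-order index. */
uint64_t morton_index(uint32_t row, uint32_t col) {
    uint64_t z = 0;
    for (int bit = 0; bit < 32; bit++) {
        z |= (uint64_t)((col >> bit) & 1) << (2 * bit);
        z |= (uint64_t)((row >> bit) & 1) << (2 * bit + 1);
    }
    return z;
}

int main(void) {
    /* Print the Z-order of a 4x4 matrix: elements are numbered in 2x2
     * recursive blocks (0 1 / 2 3 in the top-left block, and so on). */
    for (uint32_t i = 0; i < 4; i++) {
        for (uint32_t j = 0; j < 4; j++)
            printf("%2llu ", (unsigned long long)morton_index(i, j));
        printf("\n");
    }
    return 0;
}
```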

Slide 37: Summary of Parallel Matrix Multiplication
1D layout:
- Bus without broadcast: slower than serial.
- Nearest-neighbor communication on a ring (or a bus with broadcast): Efficiency = 1/(1 + O(p/n)).
2D layout:
- Cannon: Efficiency = 1/(1 + O(α * p^(3/2)/n^3 + β * sqrt(p)/n)).
  - Hard to generalize for general p and n, block-cyclic layouts, and alignment.
- SUMMA: Efficiency = 1/(1 + O(α * log p * p/(b*n^2) + β * log p * sqrt(p)/n)).
  - Very general.
  - Small b: less memory, lower efficiency. Large b: more memory, higher efficiency.
Recursive layouts: a current area of research.