
1 CS 240A: Numerical Examples in Shared Memory with Cilk++
   ∙ Matrix-matrix multiplication
   ∙ Matrix-vector multiplication
   ∙ Hyperobjects
   Thanks to Charles E. Leiserson for some of these slides.

2 Work and Span (Recap)
   T_P = execution time on P processors
   T_1 = work
   T_∞ = span*   (*also called critical-path length or computational depth)
   Speedup on P processors: T_1 / T_P
   Parallelism: T_1 / T_∞
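   To make the definitions concrete (illustrative numbers, not from the slides): a computation with work T_1 = 10^6 and span T_∞ = 10^3 has parallelism T_1/T_∞ = 10^3, so it can usefully occupy up to about a thousand processors; since T_P ≥ T_∞, no number of processors can push the speedup T_1/T_P beyond 10^3.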

3 Vector Addition (Cilk Loops: Divide and Conquer)

   cilk_for (int i=0; i<n; ++i) {
       A[i] += B[i];
   }

   Implementation: a cilk_for loop is executed by divide and conquer — the runtime recursively splits the iteration range and spawns the halves, stopping at a grain size G. Assume that G = Θ(1).
   Work: T_1 = Θ(n)
   Span: T_∞ = Θ(lg n)
   Parallelism: T_1/T_∞ = Θ(n/lg n)
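   The "Implementation" note refers to how the runtime executes a cilk_for; a minimal sketch of that divide-and-conquer lowering (the names recur_for and GRAIN are illustrative — in practice the compiler generates this recursion, and grain sizes are tuned rather than set to 1):

   #include <cilk/cilk.h>   // cilk_spawn / cilk_sync (Cilk Plus / OpenCilk style)

   const int GRAIN = 1;     // grain size G; the slide assumes G = Θ(1)

   // Roughly what  cilk_for (int i = 0; i < n; ++i) A[i] += B[i];  turns into:
   void recur_for(double *A, const double *B, int lo, int hi) {
       if (hi - lo <= GRAIN) {              // base case: run the chunk serially
           for (int i = lo; i < hi; ++i)
               A[i] += B[i];
           return;
       }
       int mid = lo + (hi - lo) / 2;        // split the iteration range in half
       cilk_spawn recur_for(A, B, lo, mid); // left half runs in parallel...
       recur_for(A, B, mid, hi);            // ...with the right half
       cilk_sync;                           // join before returning
   }

   The spawn tree has depth Θ(lg n), which is where the Θ(lg n) span of a cilk_for comes from.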

4 Square-Matrix Multiplication
   C = A · B, where A, B, C are n × n matrices and
   c_ij = Σ_{k=1..n} a_ik · b_kj
   Assume for simplicity that n = 2^k.

5 Parallelizing Matrix Multiply

   cilk_for (int i=0; i<n; ++i) {
       cilk_for (int j=0; j<n; ++j) {
           for (int k=0; k<n; ++k) {
               C[i][j] += A[i][k] * B[k][j];
           }
       }
   }

   Work: T_1 = Θ(n³)
   Span: T_∞ = Θ(n)   (the two cilk_for loops contribute Θ(lg n) each; the k loop contributes Θ(n) and must stay serial, since all of its iterations update the same C[i][j])
   Parallelism: T_1/T_∞ = Θ(n²)
   For 1000 × 1000 matrices, parallelism ≈ (10³)² = 10⁶.

6 Recursive Matrix Multiplication
   Divide and conquer on n × n matrices, partitioned into n/2 × n/2 blocks:

   [ C11 C12 ]   [ A11 A12 ]   [ B11 B12 ]
   [ C21 C22 ] = [ A21 A22 ] · [ B21 B22 ]

                 [ A11·B11  A11·B12 ]   [ A12·B21  A12·B22 ]
               = [ A21·B11  A21·B12 ] + [ A22·B21  A22·B22 ]

   8 multiplications of n/2 × n/2 matrices.
   1 addition of n × n matrices.

7 D&C Matrix Multiplication

   template <typename T>
   void MMult(T *C, T *A, T *B, int n) {   // n = row/column length of the matrices
       T *D = new T[n*n];
       // base case (coarsen for efficiency) & partition matrices
       // (submatrices C11, ..., D22 are determined by index calculation)
       cilk_spawn MMult(C11, A11, B11, n/2);
       cilk_spawn MMult(C12, A11, B12, n/2);
       cilk_spawn MMult(C22, A21, B12, n/2);
       cilk_spawn MMult(C21, A21, B11, n/2);
       cilk_spawn MMult(D11, A12, B21, n/2);
       cilk_spawn MMult(D12, A12, B22, n/2);
       cilk_spawn MMult(D22, A22, B22, n/2);
       MMult(D21, A22, B21, n/2);
       cilk_sync;
       MAdd(C, D, n);   // C += D;
       delete[] D;      // free the temporary
   }

8 Matrix Addition
   The MAdd call at the end of MMult performs C += D:

   template <typename T>
   void MAdd(T *C, T *D, int n) {
       cilk_for (int i=0; i<n; ++i) {
           cilk_for (int j=0; j<n; ++j) {
               C[n*i+j] += D[n*i+j];
           }
       }
   }

9 Analysis of Matrix Addition
   For the MAdd code above:
   Work: A_1(n) = Θ(n²)
   Span: A_∞(n) = Θ(lg n)
   Nested cilk_for statements still give Θ(lg n) span: each cilk_for contributes a Θ(lg n)-deep spawn tree, and Θ(lg n) + Θ(lg n) = Θ(lg n).

10 Work of Matrix Multiplication
   For the MMult code (with the temporary D):
   Work: M_1(n) = 8·M_1(n/2) + A_1(n) + Θ(1)
               = 8·M_1(n/2) + Θ(n²)
               = Θ(n³)
   (Master method, Case 1: n^{log_b a} = n^{log_2 8} = n³ dominates f(n) = Θ(n²).)

11 Span of Matrix Multiplication
   The eight spawned multiplies run in parallel, so only the maximum of their spans counts, followed by the span of MAdd:
   Span: M_∞(n) = M_∞(n/2) + A_∞(n) + Θ(1)
               = M_∞(n/2) + Θ(lg n)
               = Θ(lg² n)
   (Master method, Case 2: n^{log_b a} = n^{log_2 1} = 1 and f(n) = Θ(n^{log_b a} · lg¹ n).)

12 Parallelism of Matrix Multiply
   Work: M_1(n) = Θ(n³)
   Span: M_∞(n) = Θ(lg² n)
   Parallelism: M_1(n)/M_∞(n) = Θ(n³ / lg² n)
   For 1000 × 1000 matrices, parallelism ≈ (10³)³ / 10² = 10⁷.

13 Stack Temporaries
   IDEA: Since minimizing storage tends to yield higher performance, trade off parallelism for less storage: eliminate the temporary D (the line  T *D = new T[n*n];  in MMult) and accumulate directly into C.

14 No-Temp Matrix Multiplication
   Saves space, but at what expense?

   // C += A*B;
   template <typename T>
   void MMult2(T *C, T *A, T *B, int n) {
       // base case & partition matrices
       cilk_spawn MMult2(C11, A11, B11, n/2);
       cilk_spawn MMult2(C12, A11, B12, n/2);
       cilk_spawn MMult2(C22, A21, B12, n/2);
       MMult2(C21, A21, B11, n/2);
       cilk_sync;
       cilk_spawn MMult2(C11, A12, B21, n/2);
       cilk_spawn MMult2(C12, A12, B22, n/2);
       cilk_spawn MMult2(C22, A22, B22, n/2);
       MMult2(C21, A22, B21, n/2);
       cilk_sync;
   }
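   The slide elides the base case and the "index calculation" that produces C11 … B22 (see slide 7's annotations). A self-contained sketch of one way to do it, assuming row-major storage, n a power of two as on slide 4, and an added leading-dimension parameter ld (the name MMult2LD and the threshold BASE are illustrative additions, not from the slides):

   #include <cilk/cilk.h>

   const int BASE = 32;   // illustrative coarsening threshold ("coarsen for efficiency")

   // C += A*B on n x n blocks embedded in row-major arrays with leading dimension ld.
   // Top-level call for full n x n matrices: MMult2LD(C, A, B, n, n).
   template <typename T>
   void MMult2LD(T *C, T *A, T *B, int n, int ld) {
       if (n <= BASE) {                            // coarsened serial base case
           for (int i = 0; i < n; ++i)
               for (int k = 0; k < n; ++k)
                   for (int j = 0; j < n; ++j)
                       C[i*ld + j] += A[i*ld + k] * B[k*ld + j];
           return;
       }
       int h = n / 2;
       // Index calculation: block X11 starts at X, X12 at X + h,
       // X21 at X + h*ld, and X22 at X + h*ld + h.
       T *C11 = C, *C12 = C + h, *C21 = C + h*ld, *C22 = C + h*ld + h;
       T *A11 = A, *A12 = A + h, *A21 = A + h*ld, *A22 = A + h*ld + h;
       T *B11 = B, *B12 = B + h, *B21 = B + h*ld, *B22 = B + h*ld + h;

       cilk_spawn MMult2LD(C11, A11, B11, h, ld);
       cilk_spawn MMult2LD(C12, A11, B12, h, ld);
       cilk_spawn MMult2LD(C22, A21, B12, h, ld);
       MMult2LD(C21, A21, B11, h, ld);
       cilk_sync;
       cilk_spawn MMult2LD(C11, A12, B21, h, ld);
       cilk_spawn MMult2LD(C12, A12, B22, h, ld);
       cilk_spawn MMult2LD(C22, A22, B22, h, ld);
       MMult2LD(C21, A22, B21, h, ld);
       cilk_sync;
   }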

15 Work of No-Temp Multiply
   Work: M_1(n) = 8·M_1(n/2) + Θ(1) = Θ(n³)
   (Master method, Case 1: n^{log_b a} = n^{log_2 8} = n³, f(n) = Θ(1).)

16 Span of No-Temp Multiply
   The two cilk_sync phases run one after the other, and within each phase only the maximum span of the four parallel recursive calls counts:
   Span: M_∞(n) = 2·M_∞(n/2) + Θ(1) = Θ(n)
   (Master method, Case 1: n^{log_b a} = n^{log_2 2} = n, f(n) = Θ(1).)

17 Parallelism of No-Temp Multiply
   Work: M_1(n) = Θ(n³)
   Span: M_∞(n) = Θ(n)
   Parallelism: M_1(n)/M_∞(n) = Θ(n²)
   For 1000 × 1000 matrices, parallelism ≈ (10³)² = 10⁶. Faster in practice!

18 How general was that?
   ∙ Matrices are often rectangular.
   ∙ Even when they are square, the dimensions are rarely a power of two.
   C (m × n) = A (m × k) · B (k × n). Which dimension should we split?

19 General Matrix Multiplication

   template <typename T>
   void MMult3(T **A, T **B, T **C, int i0, int i1, int j0, int j1, int k0, int k1) {
       int di = i1 - i0;
       int dj = j1 - j0;
       int dk = k1 - k0;
       if (di >= dj && di >= dk && di >= THRESHOLD) {    // split m if it is the largest
           int mi = i0 + di / 2;
           MMult3(A, B, C, i0, mi, j0, j1, k0, k1);
           MMult3(A, B, C, mi, i1, j0, j1, k0, k1);
       } else if (dj >= dk && dj >= THRESHOLD) {         // split n if it is the largest
           int mj = j0 + dj / 2;
           MMult3(A, B, C, i0, i1, j0, mj, k0, k1);
           MMult3(A, B, C, i0, i1, mj, j1, k0, k1);
       } else if (dk >= THRESHOLD) {                     // split k if it is the largest
           int mk = k0 + dk / 2;
           MMult3(A, B, C, i0, i1, j0, j1, k0, mk);
           MMult3(A, B, C, i0, i1, j0, j1, mk, k1);
       } else {
           // Iterative (triple-nested loop) multiply
           for (int i = i0; i < i1; ++i)
               for (int j = j0; j < j1; ++j)
                   for (int k = k0; k < k1; ++k)
                       C[i][j] += A[i][k] * B[k][j];
       }
   }

20 Parallelizing General MMult

   template <typename T>
   void MMult3(T **A, T **B, T **C, int i0, int i1, int j0, int j1, int k0, int k1) {
       int di = i1 - i0;
       int dj = j1 - j0;
       int dk = k1 - k0;
       if (di >= dj && di >= dk && di >= THRESHOLD) {
           int mi = i0 + di / 2;
           cilk_spawn MMult3(A, B, C, i0, mi, j0, j1, k0, k1);
           MMult3(A, B, C, mi, i1, j0, j1, k0, k1);
       } else if (dj >= dk && dj >= THRESHOLD) {
           int mj = j0 + dj / 2;
           cilk_spawn MMult3(A, B, C, i0, i1, j0, mj, k0, k1);
           MMult3(A, B, C, i0, i1, mj, j1, k0, k1);
       } else if (dk >= THRESHOLD) {
           int mk = k0 + dk / 2;
           // Unsafe to spawn here unless we use a temporary!
           MMult3(A, B, C, i0, i1, j0, j1, k0, mk);
           MMult3(A, B, C, i0, i1, j0, j1, mk, k1);
       } else {
           // Iterative (triple-nested loop) multiply
           for (int i = i0; i < i1; ++i)
               for (int j = j0; j < j1; ++j)
                   for (int k = k0; k < k1; ++k)
                       C[i][j] += A[i][k] * B[k][j];
       }
   }

21 Split m
   The two halves work on disjoint row-blocks of C (and of A), so there are no races — safe to spawn!

22 Split n
   The two halves work on disjoint column-blocks of C (and of B), so there are no races — safe to spawn!

23 Split k
   Both halves accumulate into every entry of C (each C[i][j] sums over the whole k range), so spawning them creates data races — unsafe to spawn!
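   Slide 20's comment says the k-split is unsafe to spawn "unless we use a temporary"; the slides never show that option, so here is a minimal standalone sketch of the idea (mm_range, mm_split_k, and the flat row-major layout are illustrative choices, not the slides' MMult3): run one k-half into C and the other into a zeroed temporary D, then add D into C — the same storage-for-parallelism trade-off as on slides 13–17.

   #include <cilk/cilk.h>
   #include <cstddef>
   #include <vector>

   // Accumulate the k-range [k0, k1) of A*B into Out.
   // All arrays are row-major: Out is m x n, A is m x k, B is k x n.
   static void mm_range(double *Out, const double *A, const double *B,
                        int m, int n, int k, int k0, int k1) {
       for (int i = 0; i < m; ++i)
           for (int kk = k0; kk < k1; ++kk)
               for (int j = 0; j < n; ++j)
                   Out[i*n + j] += A[i*k + kk] * B[kk*n + j];
   }

   // Split the k dimension, giving the second half its own zeroed temporary D,
   // so the two halves can run in parallel without racing on C; then C += D.
   void mm_split_k(double *C, const double *A, const double *B,
                   int m, int n, int k) {
       int mk = k / 2;
       std::vector<double> D(static_cast<std::size_t>(m) * n, 0.0);
       cilk_spawn mm_range(C, A, B, m, n, k, 0, mk);   // first half accumulates into C
       mm_range(D.data(), A, B, m, n, k, mk, k);       // second half accumulates into D
       cilk_sync;
       cilk_for (int i = 0; i < m; ++i)                // C += D (disjoint writes, no race)
           for (int j = 0; j < n; ++j)
               C[i*n + j] += D[i*n + j];
   }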

24 Matrix-Vector Multiplication
   y = A·x, where A is m × n, x has length n, and y has length m:
   y_i = Σ_{j=1..n} a_ij · x_j
   Let each worker handle a single row!

25 Matrix-Vector Multiplication

   Serial:
   template <typename T>
   void MatVec(T **A, T *x, T *y, int m, int n) {
       for (int i=0; i<m; i++) {
           for (int j=0; j<n; j++)
               y[i] += A[i][j] * x[j];
       }
   }

   Parallelize the outer loop:
   template <typename T>
   void MatVec(T **A, T *x, T *y, int m, int n) {
       cilk_for (int i=0; i<m; i++) {
           for (int j=0; j<n; j++)
               y[i] += A[i][j] * x[j];
       }
   }

26 Matrix-Transpose × Vector
   y = Aᵀ·x, where A is m × n, x has length m, and y has length n:
   y_i = Σ_{j=1..m} a_ji · x_j
   The data is still A; there is no explicit transposition.

27 Matrix-Transpose × Vector

   Parallelize over the columns of A (the entries of y):
   template <typename T>
   void MatTransVec(T **A, T *x, T *y, int m, int n) {
       cilk_for (int i=0; i<n; i++) {
           for (int j=0; j<m; j++)
               y[i] += A[j][i] * x[j];
       }
   }
   Terrible performance: the inner loop strides down a column of A — about one cache miss per iteration.

   Reorder the loops so the inner loop walks a row of A:
   template <typename T>
   void MatTransVec(T **A, T *x, T *y, int m, int n) {
       cilk_for (int j=0; j<m; j++) {
           for (int i=0; i<n; i++)
               y[i] += A[j][i] * x[j];
       }
   }
   Data race! Different iterations of the parallel j loop update the same y[i].

28 Hyperobjects
   ∙ The data race on y can be avoided by splitting y into multiple copies (views) that are never accessed concurrently.
   ∙ A hyperobject is a Cilk++ object that shows distinct views to different observers.
   ∙ Before a cilk_sync completes, the child's view is reduced (merged) into the parent's view.
   ∙ For correctness, the reduce() function should be associative (it need not be commutative).

   template <typename T>
   struct add_monoid : cilk::monoid_base<T> {
       void reduce(T *left, T *right) const { *left += *right; }
       …
   };
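   As a warm-up for the array reducer on the next slide, here is a smaller use of a built-in hyperobject, a sum reducer (a sketch assuming the Cilk Plus-style reducer_opadd header; exact header names and accessor syntax differ between Cilk++ and later Cilk releases):

   #include <cilk/cilk.h>
   #include <cilk/reducer_opadd.h>

   // Sum an array in parallel with no data race on the accumulator:
   // each strand updates its own view of `sum`, and views are merged with +,
   // which is associative, so the result equals the serial sum.
   double parallel_sum(const double *x, int n) {
       cilk::reducer_opadd<double> sum(0.0);
       cilk_for (int i = 0; i < n; ++i)
           sum += x[i];              // updates this strand's private view
       return sum.get_value();       // the fully reduced value
   }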

29 Hyperobject solution
   ∙ Use a built-in hyperobject (there are many; read the REDUCERS chapter of the programmer's guide):

   template <typename T>
   void MatTransVec(T **A, T *x, T *y, int m, int n) {
       array_reducer_t art(n, y);
       cilk::hyperobject<array_reducer_t> rvec(art);
       cilk_for (int j=0; j<m; j++) {
           T *array = rvec().array;
           for (int i=0; i<n; i++)
               array[i] += A[j][i] * x[j];
       }
   }

   ∙ Use hyperobjects sparingly, on infrequently accessed global variables.
   ∙ This example is for educational purposes; there are better ways of parallelizing y = Aᵀx.
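   The slides do not say what those "better ways" are; one common option (a sketch — the blocking factor and the name MatTransVecBlocked are illustrative): give each worker a contiguous block of the output y, i.e., a block of columns of A. Writes to y are then disjoint, so no reducer is needed, and the inner loop still walks each row of A with unit stride.

   #include <cilk/cilk.h>
   #include <algorithm>

   template <typename T>
   void MatTransVecBlocked(T **A, T *x, T *y, int m, int n, int block = 256) {
       int nblocks = (n + block - 1) / block;
       cilk_for (int b = 0; b < nblocks; ++b) {
           int i0 = b * block;
           int i1 = std::min(n, i0 + block);
           for (int j = 0; j < m; ++j)          // walk the rows of A
               for (int i = i0; i < i1; ++i)    // contiguous piece of row j
                   y[i] += A[j][i] * x[j];      // each worker owns y[i0..i1): no race
       }
   }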