EE382N (20): Computer Architecture - Parallelism and Locality, Fall 2011
Lecture 14 – Parallelism in Software V
Mattan Erez, The University of Texas at Austin
(c) Rodric Rabbah, Mattan Erez

Quick recap
- Decomposition: high-level and fairly abstract; consider machine scale for the most part; task, data, pipeline; find dependencies
- Algorithm structure: still abstract, but a bit less so; consider communication, sync, and bookkeeping; task (collection/recursive), data (geometric/recursive), dataflow (pipeline/event-based coordination)
- Supporting structures: loop, master/worker, fork/join, SPMD, MapReduce

Algorithm Structure and Organization (my view)
(table: rates how well each supporting structure — SPMD, Loop Parallelism, Master/Worker, Fork/Join — supports each algorithm structure — task parallelism, divide and conquer, geometric decomposition, recursive data, pipeline, event-based coordination — on a one-to-four-star scale; e.g., Loop Parallelism is **** when there are no dependencies, and **** with SWP to hide communication)
Patterns can be hierarchically composed so that a program uses more than one pattern.

Patterns for Parallelizing Programs: 4 Design Spaces
Algorithm expression:
- Finding Concurrency: expose concurrent tasks
- Algorithm Structure: map tasks to processes to exploit parallel architecture
Software construction:
- Supporting Structures: code and data structuring patterns
- Implementation Mechanisms: low-level mechanisms used to write parallel programs
Patterns for Parallel Programming. Mattson, Sanders, and Massingill (2005).

ILP, DLP, and TLP in SW and HW
In HW:
- ILP: OOO, dataflow, VLIW
- DLP: SIMD, vector
- TLP: essentially multiple cores with multiple sequencers
In SW:
- ILP: within straight-line code
- DLP: parallel loops; tasks operating on disjoint data; no dependencies within the parallelism phase
- TLP: all of DLP, plus producer-consumer chains

ILP, DLP, and TLP and Supporting Patterns
(table: maps ILP, DLP, and TLP onto the algorithm-structure patterns — task parallelism, divide and conquer, geometric decomposition, recursive data, pipeline, event-based coordination)
- ILP: inline / unroll; inline; unroll
- DLP: natural or local-conditions; after enough divisions; natural; after enough branches; difficult; local-conditions
- TLP

ILP, DLP, and TLP and Implementation Patterns
(table: maps ILP, DLP, and TLP onto the supporting structures — SPMD, Loop Parallelism, Master/Worker, Fork/Join)
- ILP: pipeline; unroll; inline
- DLP: natural or local-conditional; natural; local-conditional; after enough divisions + local-conditional
- TLP

Outline
- Molecular dynamics example: problem description; steps to solution (build data structures, compute forces, integrate for new positions, check global solution, repeat)
- Finding concurrency: scans, data decomposition, reductions
- Algorithm structure
- Supporting structures

Credits
- Parallel scan slides courtesy David Kirk (NVIDIA) and Wen-Mei Hwu (UIUC), taken from EE493-AI taught at UIUC in Spring 2007
- Reduction slides courtesy Dr. Rodric Rabbah (IBM), taken from 6.189 IAP taught at MIT in 2007

GROMACS
- Highly optimized molecular-dynamics package
- Popular code
- Specifically tuned for protein folding
- Hand-optimized loops for SSE3 (and other extensions)

Gromacs Components
- Non-bonded forces: water-water with cutoff; protein-protein tabulated; water-water tabulated; protein-water tabulated
- Bonded forces: angles; dihedrals
- Boundary conditions
- Verlet integrator
- Constraints: SHAKE; SETTLE
- Other: temperature–pressure coupling; virial calculation

GROMACS Water-Water Force Calculation
- Non-bonded long-range interactions: Coulomb; Lennard-Jones
- 234 operations per interaction
- Water-water interaction is ~75% of GROMACS run-time

GROMACS Uses a Non-Trivial Neighbor-List Algorithm
- Full non-bonded force calculation is O(n^2)
- GROMACS approximates with a cutoff: molecules located more than r_c apart do not interact, giving O(n * r_c^3)
- Separate neighbor-list for each molecule; neighbor-lists have a variable number of elements
(figure: central molecules and the neighbor molecules within their cutoff radius)
The efficient algorithm leads to variable-rate input streams.
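To make the variable-rate structure concrete, here is a minimal C sketch of a cutoff force loop over per-molecule neighbor lists; the flattened CSR-style layout (list_start/list_entries) and the pair_force helper are illustrative assumptions, not GROMACS's actual data structures:

    /* Sketch: cutoff-based force accumulation over per-molecule neighbor
       lists. Layout and names are illustrative, not GROMACS code. */
    typedef struct { float x, y, z; } vec3;

    vec3 pair_force(vec3 a, vec3 b);  /* hypothetical Coulomb + Lennard-Jones kernel */

    void nonbonded_forces(int n_mol, const vec3 *pos,
                          const int *list_start,   /* n_mol+1 offsets into entries */
                          const int *list_entries, /* flattened neighbor ids */
                          vec3 *force)
    {
        for (int i = 0; i < n_mol; ++i) {
            /* Variable-length list: only molecules within r_c appear. */
            for (int k = list_start[i]; k < list_start[i + 1]; ++k) {
                int j = list_entries[k];
                vec3 f = pair_force(pos[i], pos[j]);
                force[i].x += f.x; force[i].y += f.y; force[i].z += f.z;
            }
        }
    }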

Parallel Prefix Sum (Scan)
Definition: the all-prefix-sums operation takes a binary associative operator ⊕ with identity I, and an array of n elements [a0, a1, …, an-1], and returns the ordered set [I, a0, (a0 ⊕ a1), …, (a0 ⊕ a1 ⊕ … ⊕ an-2)].
Example: if ⊕ is addition, then scan on the set [3 1 7 0 4 1 6 3] returns the set [0 3 4 11 11 15 16 22].
Exclusive scan: the last input element is not included in the result.
(From Blelloch, 1990, "Prefix Sums and Their Applications".)

Applications of Scan
Scan is a simple and useful parallel building block. It converts recurrences from sequential:

    for (j = 1; j < n; j++) out[j] = out[j-1] + f(j);

into parallel:

    forall (j) { temp[j] = f(j); }
    scan(out, temp);

Useful for many parallel algorithms: radix sort, quicksort, string comparison, lexical analysis, stream compaction, polynomial evaluation, solving recurrences, tree operations, building data structures, etc.
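As one example from the list above, stream compaction reduces to an exclusive scan over 0/1 flags; a minimal serial sketch (the helper names exclusive_scan and compact are illustrative, not from the slides):

    /* Exclusive scan of 0/1 flags: idx[j] = number of kept elements before j. */
    static void exclusive_scan(int *idx, const int *flags, int n)
    {
        int sum = 0;
        for (int j = 0; j < n; ++j) { idx[j] = sum; sum += flags[j]; }
    }

    /* Keep only elements with keep[i] != 0; returns the compacted count.
       The scan gives each surviving element its output index, so the
       scatter step can run fully in parallel. */
    int compact(float *dst, const float *src, const int *keep, int n, int *idx)
    {
        exclusive_scan(idx, keep, n);
        for (int i = 0; i < n; ++i)      /* forall(i) in the parallel version */
            if (keep[i]) dst[idx[i]] = src[i];
        return n ? idx[n - 1] + keep[n - 1] : 0;
    }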

Building Data Structures with Scans
Fun on the board

Scan on a serial CPU

    /* Exclusive scan: each output is the sum of all inputs before it. */
    void scan(float* scanned, float* input, int length)
    {
        scanned[0] = 0;
        for (int i = 1; i < length; ++i)
            scanned[i] = input[i-1] + scanned[i-1];
    }

Just add each element to the sum of the elements before it. Trivial, but sequential. Exactly n adds: optimal.
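As a quick sanity check (not in the original slides), a small driver running scan() as defined above on the deck's example input prints the expected exclusive-scan result:

    #include <stdio.h>

    int main(void)
    {
        float in[8] = {3, 1, 7, 0, 4, 1, 6, 3};
        float out[8];
        scan(out, in, 8);
        for (int i = 0; i < 8; ++i)
            printf("%g ", out[i]);   /* 0 3 4 11 11 15 16 22 */
        printf("\n");
        return 0;
    }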

A First-Attempt Parallel Scan Algorithm
1. Read input to shared memory. Set the first element to zero and shift the others right by one.
   In: 3 1 7 0 4 1 6 3
   T0: 0 3 1 7 0 4 1 6
Each UE reads one value from the input array in device memory into shared memory array T0. UE 0 writes 0 into the shared memory array.

A First-Attempt Parallel Scan Algorithm
2. (after the read/shift on the previous slide) Iterate log(n) times; UEs stride to n: add pairs of elements stride elements apart. Double the stride at each iteration. (Note: must double-buffer the shared memory arrays.)
   In: 3 1 7 0 4 1 6 3
   T0: 0 3 1 7 0 4 1 6
   Stride 1 → T1: 0 3 4 8 7 4 5 7
Active UEs: stride to n-1 (n-stride UEs). UE j adds elements j and j-stride from T0 and writes the result into shared memory buffer T1 (ping-pong).
Iteration #1, stride = 1

A First-Attempt Parallel Scan Algorithm
Read input from device memory to shared memory. Set the first element to zero and shift the others right by one. Iterate log(n) times; UEs stride to n: add pairs of elements stride elements apart. Double the stride at each iteration. (Note: must double-buffer the shared memory arrays.)
   In: 3 1 7 0 4 1 6 3
   T0: 0 3 1 7 0 4 1 6
   Stride 1 → T1: 0 3 4 8 7 4 5 7
   Stride 2 → T0: 0 3 4 11 11 12 12 11
Iteration #2, stride = 2

A First-Attempt Parallel Scan Algorithm
Read input from device memory to shared memory. Set the first element to zero and shift the others right by one. Iterate log(n) times; UEs stride to n: add pairs of elements stride elements apart. Double the stride at each iteration. (Note: must double-buffer the shared memory arrays.)
   In: 3 1 7 0 4 1 6 3
   T0: 0 3 1 7 0 4 1 6
   Stride 1 → T1: 0 3 4 8 7 4 5 7
   Stride 2 → T0: 0 3 4 11 11 12 12 11
   Stride 4 → T1: 0 3 4 11 11 15 16 22
Iteration #3, stride = 4

A First-Attempt Parallel Scan Algorithm
Read input from device memory to shared memory. Set the first element to zero and shift the others right by one. Iterate log(n) times; UEs stride to n: add pairs of elements stride elements apart. Double the stride at each iteration. (Note: must double-buffer the shared memory arrays.) Write output.
   In:  3 1 7 0 4 1 6 3
   T0:  0 3 1 7 0 4 1 6
   Stride 1 → T1: 0 3 4 8 7 4 5 7
   Stride 2 → T0: 0 3 4 11 11 12 12 11
   Stride 4 → T1: 0 3 4 11 11 15 16 22
   Out: 0 3 4 11 11 15 16 22
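A single-block CUDA sketch of this first-attempt scan, assuming n is a power of two, one thread per UE (n ≤ blockDim.x), and a double-buffered shared-memory array; the kernel name is illustrative:

    /* Naive double-buffered scan, one element per thread (UE). */
    __global__ void scan_naive(float *out, const float *in, int n)
    {
        extern __shared__ float temp[];   /* 2*n floats: two ping-pong buffers */
        int t = threadIdx.x;
        int pout = 0, pin = 1;
        /* Exclusive scan: shift right by one, first element becomes 0. */
        temp[pout * n + t] = (t > 0) ? in[t - 1] : 0.0f;
        __syncthreads();
        for (int stride = 1; stride < n; stride *= 2) {
            pout = 1 - pout; pin = 1 - pin;   /* swap buffers */
            if (t >= stride)
                temp[pout * n + t] = temp[pin * n + t] + temp[pin * n + t - stride];
            else
                temp[pout * n + t] = temp[pin * n + t];
            __syncthreads();
        }
        out[t] = temp[pout * n + t];
    }

Launched as scan_naive<<<1, n, 2 * n * sizeof(float)>>>(d_out, d_in, n).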

What is wrong with our first-attempt parallel scan?
Work efficient: a parallel algorithm is work-efficient if it does the same amount of work as an optimal sequential algorithm.
This scan executes log(n) parallel iterations; the steps do n-1, n-2, n-4, ..., n/2 adds each. Total adds: n * (log(n) - 1) + 1, i.e., O(n * log(n)) work.
This scan algorithm is NOT work-efficient: the sequential scan algorithm does n adds. A factor of log(n) hurts: 20x for 10^6 elements!
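Checking the add count: iteration k (stride 2^k) performs n - 2^k adds, so

    \sum_{k=0}^{\log_2 n - 1} (n - 2^k) = n \log_2 n - (n - 1) = n(\log_2 n - 1) + 1

which matches the total above and is Θ(n log n).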

Improving Efficiency
A common parallel algorithm pattern: balanced trees. Build a balanced binary tree on the input data and sweep it to and from the root. The tree is not an actual data structure, but a concept to determine what each UE does at each step.
For scan:
- Traverse down from leaves to root, building partial sums at internal nodes in the tree; the root holds the sum of all leaves
- Traverse back up the tree, building the scan from the partial sums

Build the Sum Tree
   T: 3 1 7 0 4 1 6 3
Assume the array is already in shared memory.

Build the Sum Tree
   T: 3 1 7 0 4 1 6 3
   Stride 1 → T: 3 4 7 7 4 5 6 9
Iteration 1, n/2 UEs. Each ⊕ corresponds to a single UE. Iterate log(n) times; each UE adds the value stride elements away to its own value.

Build the Sum Tree
   T: 3 1 7 0 4 1 6 3
   Stride 1 → T: 3 4 7 7 4 5 6 9
   Stride 2 → T: 3 4 7 11 4 5 6 14
Iteration 2, n/4 UEs. Each ⊕ corresponds to a single UE. Iterate log(n) times; each UE adds the value stride elements away to its own value.

Build the Sum Tree
   T: 3 1 7 0 4 1 6 3
   Stride 1 → T: 3 4 7 7 4 5 6 9
   Stride 2 → T: 3 4 7 11 4 5 6 14
   Stride 4 → T: 3 4 7 11 4 5 6 25
Iteration log(n), 1 UE. Each ⊕ corresponds to a single UE. Iterate log(n) times; each UE adds the value stride elements away to its own value. Note that this algorithm operates in place: no need for double buffering.

Zero the Last Element
   T: 3 4 7 11 4 5 6 0
We now have an array of partial sums. Since this is an exclusive scan, set the last element to zero. It will propagate back to the first element.

Build Scan From Partial Sums
   T: 3 4 7 11 4 5 6 0

Build Scan From Partial Sums
   T: 3 4 7 11 4 5 6 0
   Stride 4 → T: 3 4 7 0 4 5 6 11
Iteration 1, 1 UE. Each ⊕ corresponds to a single UE. Iterate log(n) times; each UE adds the value stride elements away to its own value, and sets the value stride elements away to its own previous value.

Build Scan From Partial Sums
   T: 3 4 7 11 4 5 6 0
   Stride 4 → T: 3 4 7 0 4 5 6 11
   Stride 2 → T: 3 0 7 4 4 11 6 16
Iteration 2, 2 UEs. Each ⊕ corresponds to a single UE. Iterate log(n) times; each UE adds the value stride elements away to its own value, and sets the value stride elements away to its own previous value.

Build Scan From Partial Sums
   T: 3 4 7 11 4 5 6 0
   Stride 4 → T: 3 4 7 0 4 5 6 11
   Stride 2 → T: 3 0 7 4 4 11 6 16
   Stride 1 → T: 0 3 4 11 11 15 16 22
Iteration log(n), n/2 UEs. Each ⊕ corresponds to a single UE. Done! We now have a completed scan that we can write out to device memory. Total steps: 2 * log(n). Total work: 2 * (n-1) adds = O(n). Work efficient!
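A single-block CUDA sketch of the two sweeps, assuming power-of-two n and n threads (only the first n/2, n/4, … stay active, matching the UE counts above); the kernel name and indexing scheme are illustrative:

    /* Work-efficient exclusive scan, single block, in place in shared memory. */
    __global__ void scan_work_efficient(float *out, const float *in, int n)
    {
        extern __shared__ float temp[];   /* n floats */
        int t = threadIdx.x;
        temp[t] = in[t];
        __syncthreads();
        /* Up-sweep: build the sum tree in place. */
        for (int stride = 1; stride < n; stride *= 2) {
            int i = (t + 1) * 2 * stride - 1;   /* right child of each node */
            if (i < n) temp[i] += temp[i - stride];
            __syncthreads();
        }
        /* Zero the last element (the root), then down-sweep. */
        if (t == 0) temp[n - 1] = 0.0f;
        __syncthreads();
        for (int stride = n / 2; stride >= 1; stride /= 2) {
            int i = (t + 1) * 2 * stride - 1;
            if (i < n) {
                float left = temp[i - stride];  /* save left child */
                temp[i - stride] = temp[i];     /* left child gets parent's value */
                temp[i] += left;                /* right child gets parent + left */
            }
            __syncthreads();
        }
        out[t] = temp[t];
    }

Production versions also pad shared-memory indices to avoid bank conflicts; this sketch omits that.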

Reductions
- Many-to-one
- Many-to-many: simply multiple reductions; also known as scatter-add, and a subset of parallel prefix sums
Uses: histograms, superposition, physical properties
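A minimal CUDA sketch of a many-to-many reduction as scatter-add — here a histogram, with atomicAdd standing in for the scatter-add primitive; the names are illustrative:

    /* Each thread scatter-adds its element into the bin it maps to.
       bins must be zero-initialized before launch. */
    __global__ void histogram(int *bins, const int *keys, int n, int nbins)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            atomicAdd(&bins[keys[i] % nbins], 1);  /* many UEs reduce into many bins */
    }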

Serial Reduction
- When the reduction operator is not associative
- Usually followed by a broadcast of the result
(figure: A[0] through A[3] combined sequentially into A[0:3])

Tree-based Reduction
- n steps for 2^n units of execution
- When the reduction operator is associative
- Especially attractive when only one task needs the result

Recursive-doubling Reduction
(figure: A[0] A[1] A[2] A[3] → A[0:1] A[0:1] A[2:3] A[2:3] → A[0:3] A[0:3] A[0:3] A[0:3])
- n steps for 2^n units of execution
- If all units of execution need the result of the reduction

Recursive-doubling Reduction
Better than the tree-based approach with broadcast: each unit of execution has a copy of the reduced value at the end of n steps.
In the tree-based approach with broadcast:
- Reduction takes n steps
- Broadcast cannot begin until the reduction is complete
- Broadcast can take n steps (architecture dependent)
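For comparison, a per-block CUDA sketch of the tree-based pattern (log2(blockDim.x) steps; assumes blockDim.x is a power of two; names are illustrative):

    /* Tree-based sum reduction: half the UEs drop out at each step. */
    __global__ void reduce_sum(float *block_sums, const float *in, int n)
    {
        extern __shared__ float s[];
        int t = threadIdx.x;
        int i = blockIdx.x * blockDim.x + t;
        s[t] = (i < n) ? in[i] : 0.0f;
        __syncthreads();
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (t < stride) s[t] += s[t + stride];
            __syncthreads();
        }
        if (t == 0) block_sums[blockIdx.x] = s[0];  /* only one task holds the result */
    }

If every UE needs the sum, recursive doubling avoids the separate broadcast, at the cost of keeping all UEs active for all n steps.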

Other Examples
More patterns: reductions; scans; search; building a data structure
More examples: search; sort; FFT as divide and conquer; structured meshes and grids; sparse algebra; unstructured meshes and graphs; trees; collections (particles, rays)