CSE373: Data Structures & Algorithms Lecture 27: Parallel Reductions, Maps, and Algorithm Analysis Catie Baker Spring 2015

This week…
- Homework 6 due today! Done with all homeworks
- Course evaluations: time at the end of lecture
- Lecture Friday: final exam review
- Final exam next Tuesday in this room at 2:30pm
  - Details are on the website
  - Practice past midterms

Outline
Done:
- How to write a parallel algorithm with fork and join
- Why using divide-and-conquer with lots of small tasks is best
  - Combines results in parallel
  - (Assuming the library can handle “lots of small threads”)
Now:
- More examples of simple parallel programs that fit the “map” or “reduce” patterns
- Teaser: beyond maps and reductions
- Asymptotic analysis for fork-join parallelism
- Amdahl’s Law

What else looks like this?
- Saw summing an array went from O(n) sequential to O(log n) parallel (assuming a lot of processors and a very large n)
  - Exponential speed-up in theory (n / log n grows exponentially)
[Figure: a binary tree of + nodes combining array elements pairwise in parallel]
- Anything that can use results from two halves and merge them in O(1) time has the same property…

Examples
- Maximum or minimum element
- Is there an element satisfying some property (e.g., is there a 17)?
- Left-most element satisfying some property (e.g., first 17)
- Corners of a rectangle containing all the points (a “bounding box”)
- Counts, for example, number of strings that start with a vowel
  - This is just summing with a different base case
- Many problems are!

Reductions
- Computations of this form are called reductions
- Produce a single answer from a collection via an associative operator
  - Associative: a + (b+c) = (a+b) + c
- Examples: max, count, leftmost, rightmost, sum, product, …
- Non-examples: median, subtraction, exponentiation
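For readers who want to see the pattern in runnable form, here is a minimal sketch of a max reduction written with Java’s standard java.util.concurrent fork-join framework, which is similar in spirit to the ForkJoin library used earlier in the course. The class name MaxTask, the helper parallelMax, and the CUTOFF value are illustrative assumptions, not from the slides.

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    // Parallel max reduction: divide the range, fork one half, compute the
    // other half in the current thread, then combine with Math.max.
    class MaxTask extends RecursiveTask<Integer> {
        static final int CUTOFF = 1000;   // below this size, just run sequentially
        final int[] arr;
        final int lo, hi;                 // half-open range [lo, hi)

        MaxTask(int[] arr, int lo, int hi) {
            this.arr = arr; this.lo = lo; this.hi = hi;
        }

        @Override
        protected Integer compute() {
            if (hi - lo <= CUTOFF) {               // base case: sequential max
                int max = arr[lo];
                for (int i = lo + 1; i < hi; i++)
                    max = Math.max(max, arr[i]);
                return max;
            }
            int mid = lo + (hi - lo) / 2;
            MaxTask left  = new MaxTask(arr, lo, mid);
            MaxTask right = new MaxTask(arr, mid, hi);
            left.fork();                           // run the left half in parallel
            int rightMax = right.compute();        // do the right half ourselves
            int leftMax  = left.join();            // wait for the left half
            return Math.max(leftMax, rightMax);    // associative combine step
        }

        static int parallelMax(int[] arr) {
            return ForkJoinPool.commonPool().invoke(new MaxTask(arr, 0, arr.length));
        }
    }

Changing the base case (e.g., counting elements that satisfy a property) and the combine operator (e.g., +) gives the other reductions listed above.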

Even easier: Maps (Data Parallelism)
- A map operates on each element of a collection independently to create a new collection of the same size
  - No combining of results
  - For arrays, this is so trivial that some hardware has direct support
- Canonical example: vector addition

      input 1:   6   4  16  10  14   2   8   7
      input 2:   2  10   6   6   4  18   2   8
      output:    8  14  22  16  18  20  10  15

      int[] vector_add(int[] arr1, int[] arr2) {
          assert (arr1.length == arr2.length);
          int[] result = new int[arr1.length];
          FORALL (i = 0; i < arr1.length; i++) {   // FORALL: iterations may run in parallel
              result[i] = arr1[i] + arr2[i];
          }
          return result;
      }
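The FORALL loop is lecture pseudocode for a parallel for-loop; the course’s ForkJoin library would implement it with divide-and-conquer forking. As one possible (assumed) realization, the same map can be written with Java 8 parallel streams, where the runtime splits the index range across worker threads; the class and method names below are illustrative:

    import java.util.stream.IntStream;

    class ParallelMapSketch {
        // A map: each index i is processed independently, with no combining of results.
        static int[] vectorAdd(int[] arr1, int[] arr2) {
            assert arr1.length == arr2.length;
            int[] result = new int[arr1.length];
            IntStream.range(0, arr1.length)
                     .parallel()                               // let the runtime split the range
                     .forEach(i -> result[i] = arr1[i] + arr2[i]);
            return result;
        }
    }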

Maps and reductions
- Maps and reductions: the “workhorses” of parallel programming
  - By far the two most important and common patterns
- Learn to recognize when an algorithm can be written in terms of maps and reductions
- Use maps and reductions to describe (parallel) algorithms
- Programming them becomes “trivial” with a little practice
  - Exactly like how sequential for-loops seem second-nature

Beyond maps and reductions
- Some problems are “inherently sequential”
  - “Six ovens can’t bake a pie in 10 minutes instead of an hour”
- But not all parallelizable problems are maps and reductions
- If we had one more lecture, we would show “parallel prefix”, a clever algorithm to parallelize the problem that this sequential code solves:

      input:    6   4  16  10  16  14   2   8
      output:   6  10  26  36  52  66  68  76

      int[] prefix_sum(int[] input) {
          int[] output = new int[input.length];
          output[0] = input[0];
          for (int i = 1; i < input.length; i++)
              output[i] = output[i-1] + input[i];
          return output;
      }

Digression: MapReduce on clusters
- You may have heard of Google’s “map/reduce”
  - Or the open-source version Hadoop
- Idea: perform maps/reduces on data using many machines
  - The system takes care of distributing the data and managing fault tolerance
  - You just write code to map one element and reduce elements to a combined result
- Separates how to do recursive divide-and-conquer from what computation to perform
  - Separating concerns is good software engineering

Analyzing algorithms
- Like all algorithms, parallel algorithms should be:
  - Correct
  - Efficient
- For our algorithms so far, correctness is “obvious”, so we’ll focus on efficiency
  - Want asymptotic bounds
  - Want to analyze the algorithm without regard to a specific number of processors
- Here: identify the “best we can do” if the underlying thread scheduler does its part

Work and Span
- Let TP be the running time if there are P processors available
- Two key measures of run-time:
  - Work: how long it would take 1 processor = T1
    - Just “sequentialize” the recursive forking
  - Span: how long it would take infinite processors = T∞
    - The longest dependence-chain
    - Example: O(log n) for summing an array
    - Notice having > n/2 processors is no additional help

Our simple examples
- Picture showing all the “stuff that happens” during a reduction or a map: it’s a (conceptual!) DAG
[Figure: the DAG — a “divide” phase branching down to the base cases, then a phase that combines results back up]

Connecting to performance
- Recall: TP = running time if there are P processors available
- Work = T1 = sum of the run-time of all nodes in the DAG
  - That lonely processor does everything
  - Any topological sort is a legal execution
  - O(n) for maps and reductions
- Span = T∞ = sum of the run-time of all nodes on the most-expensive path in the DAG
  - Note: costs are on the nodes, not the edges
  - Our infinite army can do everything that is ready to be done, but still has to wait for earlier results
  - O(log n) for simple maps and reductions

Speed-up
- Parallel algorithm design is about decreasing span without increasing work too much
- Speed-up on P processors: T1 / TP
- Parallelism is the maximum possible speed-up: T1 / T∞
  - At some point, adding processors won’t help
  - What that point is depends on the span
- In practice we have P processors. How well can we do?
  - We cannot do better than O(T∞)  (“must obey the span”)
  - We cannot do better than O(T1 / P)  (“must do all the work”)

Examples
- TP = O(max(T1 / P, T∞))
- In the algorithms seen so far (e.g., sum an array):
  - T1 = O(n)
  - T∞ = O(log n)
  - So expect (ignoring overheads): TP = O(max(n/P, log n))
- Suppose instead:
  - T1 = O(n²)
  - T∞ = O(n)
  - So expect (ignoring overheads): TP = O(max(n²/P, n))
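To make the first bound concrete, here is a small worked example; the values n = 2^20 and P = 8 are illustrative, not from the slides. For the array-sum case, with T1 ≈ n and T∞ ≈ log2 n:

$$
T_P \approx \max\!\left(\frac{T_1}{P},\; T_\infty\right)
    = \max\!\left(\frac{2^{20}}{8},\; \log_2 2^{20}\right)
    = \max(131072,\ 20)
    = 131072
$$

So with 8 processors the work term n/P dominates; the span term only becomes the bottleneck once P approaches roughly n / log n.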

Amdahl’s Law (mostly bad news)
- So far: we analyze parallel programs in terms of work and span
- In practice, programs typically have parts that parallelize well…
  - Such as maps/reductions over arrays
- …and parts that don’t parallelize at all
  - Such as reading a linked list, getting input, or doing computations where each step needs the result of the previous step

Amdahl’s Law (mostly bad news)
- Let the work (time to run on 1 processor) be 1 unit of time
- Let S be the portion of the execution that can’t be parallelized
  - Then: T1 = S + (1-S) = 1
- Suppose the parallel portion parallelizes perfectly (a generous assumption)
  - Then: TP = S + (1-S)/P
- So the overall speedup with P processors is (Amdahl’s Law):
  - T1 / TP = 1 / (S + (1-S)/P)
- And the parallelism (infinite processors) is:
  - T1 / T∞ = 1 / S
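As a quick sanity check of these formulas, here is a small self-contained Java sketch that tabulates the predicted speedup and its limit; the class name, the 10% sequential fraction, and the processor counts are illustrative assumptions, not from the slides.

    public class Amdahl {
        // Amdahl's Law: speedup on P processors with sequential fraction S.
        static double speedup(double s, int p) {
            return 1.0 / (s + (1.0 - s) / p);
        }

        public static void main(String[] args) {
            double s = 0.10;                               // 10% of the work is sequential
            for (int p : new int[]{1, 2, 4, 8, 64, 1024}) {
                System.out.printf("P = %4d  speedup = %.2f%n", p, speedup(s, p));
            }
            System.out.printf("limit (infinite P) = %.2f%n", 1.0 / s);   // parallelism = 1/S
        }
    }

Note how the speedup flattens quickly toward the 1/S limit as P grows.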

Why such bad news
- T1 / TP = 1 / (S + (1-S)/P)
- T1 / T∞ = 1 / S
- Suppose 33% of a program’s execution is sequential
  - Then a billion processors won’t give a speedup over 3
- Suppose you miss the good old days (1980-2005) where 12-ish years was long enough to get a 100x speedup
  - Now suppose that in 12 years, clock speed stays the same but you get 256 processors instead of 1
  - For 256 processors to get at least a 100x speedup, we need 100 ≤ 1 / (S + (1-S)/256)
  - Which means S ≤ .0061 (i.e., 99.4% perfectly parallelizable)
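For the curious, the algebra behind the last step is a short derivation (worked out here; it is not shown on the original slide):

$$
100 \le \frac{1}{S + \frac{1-S}{256}}
\iff S + \frac{1-S}{256} \le \frac{1}{100}
\iff \frac{255S + 1}{256} \le 0.01
\iff 255S \le 1.56
\iff S \le 0.0061\ldots
$$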

All is not lost
- Amdahl’s Law is a bummer!
  - Unparallelized parts become a bottleneck very quickly
  - But it doesn’t mean additional processors are worthless
- We can find new parallel algorithms
  - Some things that seem sequential are actually parallelizable
- We can change the problem or do new things
  - Example: computer graphics uses tons of parallel processors
    - Graphics Processing Units (GPUs) are massively parallel
    - They are not rendering 10-year-old graphics faster
    - They are rendering more detailed/sophisticated images

Moore and Amdahl
- Moore’s “Law” is an observation about the progress of the semiconductor industry
  - Transistor density doubles roughly every 18 months
- Amdahl’s Law is a mathematical theorem
  - Diminishing returns from adding more processors
- Both are incredibly important in designing computer systems

Course evals…
- PLEASE do them; I’m giving you time now
- What you liked, what you didn’t like
- https://uw.iasystem.org/survey/146029