A Sophomoric Introduction to Shared-Memory Parallelism and Concurrency

Lecture 3: Parallel Prefix, Pack, and Sorting

Dan Grossman
Last Updated: November 2012
For more information, see

Outline

Done:
– Simple ways to use parallelism for counting, summing, finding
– Analysis of running time and implications of Amdahl’s Law

Now: clever ways to parallelize more than is intuitively possible
– Parallel prefix: this “key trick” typically underlies surprising parallelization and enables other things, like packs
– Parallel sorting: quicksort (not in place) and mergesort
  – Easy to get a little parallelism
  – With cleverness, can get a lot

The prefix-sum problem

Given int[] input, produce int[] output where output[i] is the sum of input[0]+input[1]+…+input[i].

The sequential version could be a CS1 exam problem:

int[] prefix_sum(int[] input){
  int[] output = new int[input.length];
  output[0] = input[0];
  for(int i=1; i < input.length; i++)
    output[i] = output[i-1]+input[i];
  return output;
}

This does not seem parallelizable:
– Work: O(n), Span: O(n)
– This algorithm is sequential, but a different algorithm has Work: O(n) and Span: O(log n)

Parallel prefix-sum

The parallel-prefix algorithm does two passes:
– Each pass has O(n) work and O(log n) span
– So in total there is O(n) work and O(log n) span
– So, as with array summing, the parallelism is n/log n: an exponential speedup

The first pass builds a tree bottom-up: the “up” pass.
The second pass traverses the tree top-down: the “down” pass.

Historical note: the original algorithm is due to R. Ladner and M. Fischer at the University of Washington.

Example

[Figure: two animation frames showing an 8-element input array, the prefix-sum tree built over it, and the output array. Each tree node records its range [lo,hi), its sum, and its fromleft value; the up pass fills in the sums and the down pass fills in the fromleft values.]

The algorithm, part 1

1. Up: Build a binary tree where
– the root has the sum of the range [0,n)
– if a node has the sum of [lo,hi) and hi > lo, its left child has the sum of [lo,middle) and its right child has the sum of [middle,hi)
– a leaf has the sum of [i,i+1), i.e., input[i]

This is an easy fork-join computation: combine results by actually building a binary tree with all the range-sums.
– The tree is built bottom-up in parallel
– Could be more clever with an array, as with heaps

Analysis: O(n) work, O(log n) span

The algorithm, part 2

2. Down: Pass down a value fromLeft
– the root is given a fromLeft of 0
– a node takes its fromLeft value, passes its left child the same fromLeft, and passes its right child its fromLeft plus its left child’s sum (as stored in part 1)
– at the leaf for array position i, output[i] = fromLeft + input[i]

This is an easy fork-join computation: traverse the tree built in step 1 and produce no result.
– Leaves assign to output
– Invariant: fromLeft is the sum of the elements left of the node’s range

Analysis: O(n) work, O(log n) span
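
A minimal fork-join sketch of both passes in Java (my own illustration, not the course's reference code: the Node/UpPass/DownPass names, the leaf-per-element tree, and the use of java.util.concurrent are all my choices, and there is no sequential cut-off yet):

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
import java.util.concurrent.RecursiveTask;

class Node {
    final int lo, hi;   // this node covers input[lo..hi)
    int sum;            // filled in by the up pass
    Node left, right;   // both null at a leaf
    Node(int lo, int hi) { this.lo = lo; this.hi = hi; }
}

// Up pass: build the tree bottom-up, storing range sums.
class UpPass extends RecursiveTask<Node> {
    final int[] input; final int lo, hi;
    UpPass(int[] input, int lo, int hi) { this.input = input; this.lo = lo; this.hi = hi; }
    protected Node compute() {
        Node node = new Node(lo, hi);
        if (hi - lo == 1) {                       // leaf: sum of [i,i+1) is input[i]
            node.sum = input[lo];
        } else {
            int mid = (lo + hi) / 2;
            UpPass leftHalf = new UpPass(input, lo, mid);
            leftHalf.fork();                      // left half runs in parallel
            node.right = new UpPass(input, mid, hi).compute();
            node.left = leftHalf.join();
            node.sum = node.left.sum + node.right.sum;
        }
        return node;
    }
}

// Down pass: pass fromLeft down; the leaves write the output.
class DownPass extends RecursiveAction {
    final Node node; final int[] input, output; final int fromLeft;
    DownPass(Node node, int[] input, int[] output, int fromLeft) {
        this.node = node; this.input = input; this.output = output; this.fromLeft = fromLeft;
    }
    protected void compute() {
        if (node.left == null) {                  // leaf for position node.lo
            output[node.lo] = fromLeft + input[node.lo];    // inclusive prefix sum
        } else {
            DownPass leftHalf = new DownPass(node.left, input, output, fromLeft);
            leftHalf.fork();
            // the right child's fromLeft also includes everything in the left child's range
            new DownPass(node.right, input, output, fromLeft + node.left.sum).compute();
            leftHalf.join();
        }
    }
}

class PrefixSum {
    static int[] prefixSum(int[] input) {         // assumes input.length >= 1
        int[] output = new int[input.length];
        ForkJoinPool pool = ForkJoinPool.commonPool();
        Node root = pool.invoke(new UpPass(input, 0, input.length));
        pool.invoke(new DownPass(root, input, output, 0));  // root's fromLeft is 0
        return output;
    }
}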

Sequential cut-off

Adding a sequential cut-off is easy, as always:

Up: below the cut-off it is just a sum; have a leaf node hold the sum of a whole range.

Down: below the cut-off, a leaf runs the sequential loop over its range:

output[lo] = fromLeft + input[lo];
for(i=lo+1; i < hi; i++)
  output[i] = output[i-1] + input[i];

Parallel prefix, generalized

Just as sum-array was the simplest example of a common pattern, prefix-sum illustrates a pattern that arises in many, many problems:
– Minimum or maximum of all elements to the left of i
– Is there an element to the left of i satisfying some property?
– Count of elements to the left of i satisfying some property
  – This last one is perfect for an efficient parallel pack…
  – Perfect for building on top of the “parallel prefix trick”

We did an inclusive sum, but exclusive is just as easy (see the small illustration below).
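
For concreteness, here is a small sequential illustration of the inclusive/exclusive distinction (the input values are just a sample I chose; in the tree algorithm, the only change for an exclusive sum is that a leaf writes fromLeft instead of fromLeft + input[i]):

int[] input = {6, 4, 16, 10, 16, 14, 2, 8};
int[] inclusive = new int[input.length];
int[] exclusive = new int[input.length];
int sum = 0;
for (int i = 0; i < input.length; i++) {
    exclusive[i] = sum;     // sum of everything strictly left of i
    sum += input[i];
    inclusive[i] = sum;     // sum including input[i]
}
// inclusive: {6, 10, 26, 36, 52, 66, 68, 76}
// exclusive: {0,  6, 10, 26, 36, 52, 66, 68}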

Pack

[Non-standard terminology]

Given an array input, produce an array output containing only the elements for which f(elt) is true.

Example:
input  [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
f: is elt > 10?
output [17, 11, 13, 19, 24]

Parallelizable?
– Finding the elements for the output is easy
– But getting them into the right places seems hard

Parallel prefix to the rescue

1. Parallel map to compute a bit-vector for the true elements:
   input  [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
   bits   [ 1, 0, 0, 0,  1, 0,  1,  1, 0,  1]
2. Parallel-prefix sum on the bit-vector:
   bitsum [ 1, 1, 1, 1,  2, 2,  3,  4, 4,  5]
3. Parallel map to produce the output:
   output [17, 11, 13, 19, 24]

output = new array of size bitsum[n-1]
FORALL(i=0; i < input.length; i++){
  if(bits[i]==1)
    output[bitsum[i]-1] = input[i];
}
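
A sketch of all three steps, reusing the PrefixSum sketch from earlier (the IntPredicate parameter is my choice, and the two maps are written as plain sequential loops for brevity; every iteration is independent, so each is really a FORALL):

import java.util.function.IntPredicate;

class Pack {
    // Pack the elements satisfying f into a new, dense output array.
    // Assumes input.length >= 1, matching the PrefixSum sketch above.
    static int[] pack(int[] input, IntPredicate f) {
        int n = input.length;
        int[] bits = new int[n];
        for (int i = 0; i < n; i++)                // step 1: map to bit-vector
            bits[i] = f.test(input[i]) ? 1 : 0;
        int[] bitsum = PrefixSum.prefixSum(bits);  // step 2: parallel prefix sum
        int[] output = new int[bitsum[n - 1]];     // bitsum[n-1] = number of kept elements
        for (int i = 0; i < n; i++)                // step 3: map into final positions
            if (bits[i] == 1)
                output[bitsum[i] - 1] = input[i];
        return output;
    }
}

// Pack.pack(new int[]{17, 4, 6, 8, 11, 5, 13, 19, 0, 24}, x -> x > 10)
// returns {17, 11, 13, 19, 24}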

Pack comments

The first two steps can be combined into one pass:
– just use a different base case for the prefix sum
– no effect on asymptotic complexity

The third step can also be combined into the down pass of the prefix sum:
– again, no effect on asymptotic complexity

Analysis: O(n) work, O(log n) span
– 2 or 3 passes, but 3 is a constant

Parallelized packs will help us parallelize quicksort…

Quicksort review

Recall that quicksort is sequential, in-place, and expected time O(n log n).

                                          Best/expected-case work
1. Pick a pivot element                   O(1)
2. Partition all the data into:           O(n)
   A. the elements less than the pivot
   B. the pivot
   C. the elements greater than the pivot
3. Recursively sort A and C               2T(n/2)

How should we parallelize this?

Quicksort

                                          Best/expected-case work
1. Pick a pivot element                   O(1)
2. Partition all the data into:           O(n)
   A. the elements less than the pivot
   B. the pivot
   C. the elements greater than the pivot
3. Recursively sort A and C               2T(n/2)

Easy: do the two recursive calls in parallel.
– Work: unchanged, of course: O(n log n)
– Span: now T(n) = O(n) + 1T(n/2), which is O(n)
– So the parallelism (i.e., work/span) is O(log n)

Doing better

An O(log n) speed-up with an infinite number of processors is okay, but a bit underwhelming:
– sorting 10^9 elements only goes 30 times faster

Google searches strongly suggest quicksort cannot do better, because the partition cannot be parallelized:
– the Internet has been known to be wrong
– but we will need auxiliary storage (no longer in place)
– in practice, constant factors may make it not worth it, but remember Amdahl’s Law

We already have everything we need to parallelize the partition…

Parallel partition (not in place)

Partition all the data into:
A. the elements less than the pivot
B. the pivot
C. the elements greater than the pivot

This is just two packs!
– We know a pack is O(n) work, O(log n) span
– Pack the elements less than the pivot into the left side of an auxiliary array
– Pack the elements greater than the pivot into the right side of the auxiliary array
– Put the pivot between them, and recursively sort
– With a little more cleverness, you can do both packs at once, but with no effect on asymptotic complexity

With O(log n) span for the partition, the total best-case and expected-case span for quicksort is
T(n) = O(log n) + 1T(n/2), which is O(log^2 n).
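
A possible not-in-place sketch built on the Pack sketch above (the middle-element pivot, the separate equal-to-pivot pack for duplicate keys, and the sequential copies and recursive calls are my simplifications; a span-conscious version would fork the two recursive sorts and place elements with parallel maps instead of System.arraycopy):

class ParallelQuicksortSketch {
    static int[] quicksort(int[] arr) {
        if (arr.length <= 1) return arr;
        int pivot = arr[arr.length / 2];                  // simplistic pivot choice
        // The partition is just packs, each O(n) work and O(log n) span:
        int[] less    = Pack.pack(arr, x -> x <  pivot);
        int[] equal   = Pack.pack(arr, x -> x == pivot);  // handles duplicate keys
        int[] greater = Pack.pack(arr, x -> x >  pivot);
        // These two calls are the ones to fork in parallel, as in the slides:
        int[] sortedLess    = quicksort(less);
        int[] sortedGreater = quicksort(greater);
        int[] out = new int[arr.length];
        System.arraycopy(sortedLess, 0, out, 0, sortedLess.length);
        System.arraycopy(equal, 0, out, sortedLess.length, equal.length);
        System.arraycopy(sortedGreater, 0, out,
                         sortedLess.length + equal.length, sortedGreater.length);
        return out;
    }
}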

Example

Step 1: pick the pivot as the median of three.

Steps 2a and 2c (combinable): pack less-than, then pack greater-than, into a second array.
– The fancy parallel prefix needed to pull this off is not shown.

Step 3: two recursive sorts in parallel.
– Can sort back into the original array (as in mergesort).

Now mergesort

Recall mergesort: sequential, not in place, worst-case O(n log n).

1. Sort the left half and the right half   2T(n/2)
2. Merge the results                       O(n)

Just like quicksort, doing the two recursive sorts in parallel changes the recurrence for the span to
T(n) = O(n) + 1T(n/2), which is O(n).
– Again, the parallelism is O(log n)
– To do better, we need to parallelize the merge
  – The trick won’t use parallel prefix this time

Parallelizing the merge

We need to merge two sorted subarrays (which may not have the same size).

Idea: suppose the larger subarray has m elements. In parallel:
– merge the first m/2 elements of the larger half with the “appropriate” elements of the smaller half
– merge the second m/2 elements of the larger half with the rest of the smaller half

Parallelizing the merge

1. Get the median of the bigger half: O(1) to compute the middle index.
2. Find how to split the smaller half at the same value as the bigger half’s split: O(log n) to binary-search the sorted smaller half.
3. The sizes of the two sub-merges conceptually split the output array: O(1).
4. Do the two sub-merges in parallel.
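
A fork-join sketch of those four steps (my illustration: it assumes both sorted runs live in one source array src and are merged into dst, lowerBound is a helper I wrote, and there is no sequential cut-off):

import java.util.concurrent.RecursiveAction;

// Merge the sorted runs src[lo1..hi1) and src[lo2..hi2) into dst starting at out.
class ParallelMerge extends RecursiveAction {
    final int[] src, dst;
    final int lo1, hi1, lo2, hi2, out;
    ParallelMerge(int[] src, int lo1, int hi1, int lo2, int hi2, int[] dst, int out) {
        this.src = src; this.lo1 = lo1; this.hi1 = hi1;
        this.lo2 = lo2; this.hi2 = hi2; this.dst = dst; this.out = out;
    }
    protected void compute() {
        if (hi1 - lo1 < hi2 - lo2) {       // make the first run the bigger half
            new ParallelMerge(src, lo2, hi2, lo1, hi1, dst, out).compute();
            return;
        }
        if (hi1 == lo1) return;            // bigger run empty, so both runs are empty
        int mid1 = (lo1 + hi1) / 2;                        // step 1: O(1)
        int mid2 = lowerBound(src, lo2, hi2, src[mid1]);   // step 2: O(log n)
        int outMid = out + (mid1 - lo1) + (mid2 - lo2);    // step 3: O(1)
        dst[outMid] = src[mid1];           // the split element lands between the sub-merges
        // step 4: the two sub-merges run in parallel
        ParallelMerge low = new ParallelMerge(src, lo1, mid1, lo2, mid2, dst, out);
        low.fork();
        new ParallelMerge(src, mid1 + 1, hi1, mid2, hi2, dst, outMid + 1).compute();
        low.join();
    }

    // First index in [lo,hi) whose element is >= key.
    static int lowerBound(int[] a, int lo, int hi, int key) {
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return lo;
    }
}

A parallel mergesort would fork its two recursive sorts as before and then invoke this merge in place of the sequential one.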

The Recursion

When we do each merge in parallel, we split the bigger subarray in half and use binary search to split the smaller one at the matching value.

Analysis

The sequential recurrence for mergesort is T(n) = 2T(n/2) + O(n), which is O(n log n).

Doing the two recursive calls in parallel but keeping a sequential merge:
– Work: same as sequential
– Span: T(n) = 1T(n/2) + O(n), which is O(n)

A parallel merge makes the work and span harder to compute:
– Each merge step does an extra O(log n) binary search to find how to split the smaller subarray
– To merge n elements total, we do two smaller merges of possibly different sizes
– But the worst-case split is (1/4)n and (3/4)n
  – This happens when the subarrays have the same size and the “smaller” one splits “all”/“none”

Analysis continued

For just a parallel merge of n elements:
– Work is T(n) = T(3n/4) + T(n/4) + O(log n), which is O(n)
– Span is T(n) = T(3n/4) + O(log n), which is O(log^2 n)
– (Neither bound is immediately obvious, but “trust me.”)

So for mergesort with a parallel merge overall:
– Work is T(n) = 2T(n/2) + O(n), which is O(n log n)
– Span is T(n) = 1T(n/2) + O(log^2 n), which is O(log^3 n)

So the parallelism (work/span) is O(n / log^2 n):
– not quite as good as quicksort’s O(n / log n)
– but it is a worst-case guarantee
– and, as always, this is just the asymptotic result