CSE332: Data Abstractions
Lecture 21: Parallel Prefix and Parallel Sorting
Tyler Robison, Summer 2010

What next?

Done:
- Simple ways to use parallelism for counting, summing, and finding elements
- Analysis of running time and implications of Amdahl's Law

Now:
- Clever ways to parallelize more effectively than is intuitively possible
- Parallel prefix:
  - This "key trick" typically underlies surprising parallelization
  - Enables other things like filters
- Parallel sorting: quicksort (not in place) and mergesort
  - Easy to get a little parallelism
  - With cleverness we can get a lot

The prefix-sum problem

Given int[] input, produce int[] output where output[i] is the sum of input[0]+input[1]+…+input[i].

Sequential is easy enough for a CSE142 exam:

    int[] prefix_sum(int[] input) {
      int[] output = new int[input.length];
      output[0] = input[0];
      for (int i = 1; i < input.length; i++)
        output[i] = output[i-1] + input[i];
      return output;
    }

This does not appear to be parallelizable; each cell depends on the previous cell.
- Work: O(n), Span: O(n)
- This algorithm is sequential, but we can design a different algorithm with parallelism for the same problem

The Parallel Prefix-Sum Algorithm

The parallel-prefix algorithm has O(n) work but a span of 2 log n.
- So the span is O(log n) and the parallelism is n/log n, an exponential speedup just like array summing
- The 2 is because there will be two "passes" on the tree; more on this later

Historical note / local bragging:
- The original algorithm is due to R. Ladner and M. Fischer in 1977
- Richard Ladner joined the UW faculty in 1971 and hasn't left

The (completely non-obvious) idea: do an initial pass to gather information, enabling us to do a second pass to get the answer. First we'll gather the 'sum' for each recursive block.

[Slide figure: a binary tree over the 8-element example input. The root covers range 0,8; its children cover ranges 0,4 and 4,8; and so on down to one leaf per array element (ranges 0,1 through 7,8). Each node has a 'sum' field and a 'fromleft' field; the input and output arrays sit below the tree.]

First pass

For each node, get the sum of all values in its range, propagating the sums up from the leaves. This works like parallel sum, but records the intermediate information.

[Slide figure: the same tree, with each node's 'sum' field now filled in.]

Second pass

Using 'sum', compute the sum of everything to the left of each node's range (call it 'fromleft'), propagating these values down from the root.

[Slide figure: the same tree, with each node's 'fromleft' field now filled in.]

The algorithm, part 1: Propagate 'sum' up

Build a binary tree where:
- the root has the sum of input[0]..input[n-1]
- each node has the sum of input[lo]..input[hi-1]
- the tree is built up from the leaves: parent.sum = left.sum + right.sum
- a leaf's sum is just its value: input[i]

This is an easy fork-join computation: combine results by actually building a binary tree with all the sums of ranges.
- The tree is built bottom-up in parallel
- Could be more clever, e.g., a heap-like "array as tree" representation

Analysis of this step: O(n) work, O(log n) span
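A minimal fork-join sketch of this first pass, assuming a simple hand-rolled Node class (the names here are illustrative, not the course's official code):

    import java.util.concurrent.RecursiveTask;

    class Node {
      int lo, hi;        // this node covers input[lo..hi-1]
      int sum;           // sum of input[lo..hi-1], filled in by the up pass
      int fromLeft;      // sum of everything left of lo, filled in by the down pass
      Node left, right;  // both null at a leaf
      Node(int lo, int hi) { this.lo = lo; this.hi = hi; }
    }

    class BuildTree extends RecursiveTask<Node> {
      final int[] input; final int lo, hi;
      BuildTree(int[] input, int lo, int hi) { this.input = input; this.lo = lo; this.hi = hi; }
      protected Node compute() {
        Node n = new Node(lo, hi);
        if (hi - lo == 1) {                   // leaf: sum is just the one element
          n.sum = input[lo];
        } else {
          int mid = (lo + hi) / 2;
          BuildTree l = new BuildTree(input, lo, mid);
          BuildTree r = new BuildTree(input, mid, hi);
          l.fork();                           // left half in parallel...
          n.right = r.compute();              // ...right half in this thread
          n.left = l.join();
          n.sum = n.left.sum + n.right.sum;   // parent.sum = left.sum + right.sum
        }
        return n;
      }
    }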

The algorithm, part 2: Propagate 'fromleft' down

- The root is given a fromLeft of 0
- Each node takes its fromLeft value and:
  - passes its left child the same fromLeft
  - passes its right child its fromLeft plus its left child's sum (as stored in part 1)
- At the leaf for array position i: output[i] = fromLeft + input[i]

This is another fork-join computation: traverse the tree built in step 1 and assign to output at the leaves (don't return a result).

Analysis of this step: O(n) work, O(log n) span
Total for the algorithm: O(n) work, O(log n) span
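Continuing the sketch above: the second pass traverses the tree built by BuildTree and fills in output at the leaves (again, illustrative code rather than the course's official version):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveAction;

    class PropagateDown extends RecursiveAction {
      final Node node; final int[] input, output; final int fromLeft;
      PropagateDown(Node node, int[] input, int[] output, int fromLeft) {
        this.node = node; this.input = input; this.output = output; this.fromLeft = fromLeft;
      }
      protected void compute() {
        node.fromLeft = fromLeft;
        if (node.left == null) {                       // leaf: output[i] = fromLeft + input[i]
          output[node.lo] = fromLeft + input[node.lo];
        } else {
          PropagateDown l = new PropagateDown(node.left,  input, output, fromLeft);
          PropagateDown r = new PropagateDown(node.right, input, output, fromLeft + node.left.sum);
          l.fork();        // left subtree in parallel...
          r.compute();     // ...right subtree in this thread
          l.join();
        }
      }
    }

Putting the two passes together:

    ForkJoinPool pool = new ForkJoinPool();
    int[] output = new int[input.length];
    Node root = pool.invoke(new BuildTree(input, 0, input.length));
    pool.invoke(new PropagateDown(root, input, output, 0));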

Sequential cut-off

Adding a sequential cut-off isn't too bad:
- Step one (propagating up): at a leaf that now holds a whole range below the cut-off, sequentially compute the sum for that range; the tree itself will be shallower
- Step two (propagating down): at such a leaf, sequentially produce the outputs:

    output[lo] = fromLeft + input[lo];
    for (int i = lo + 1; i < hi; i++)
      output[i] = output[i-1] + input[i];

Parallel prefix, generalized

Just as sum-array was the simplest example of a pattern that matches many problems, so is prefix-sum:
- an array that stores the minimum/maximum of all elements to the left of i, for any i
- is there an element to the left of i satisfying some property?
- a count of all elements to the left of i satisfying some property
- we did an inclusive sum, but exclusive is just as easy (see the sketch below)
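For instance, to make the sum exclusive (output[i] covers everything strictly before i), only the leaf assignment in the down pass changes; a two-line sketch against the PropagateDown code above:

    // inclusive (as above): output[node.lo] = fromLeft + input[node.lo];
    // exclusive:            output[node.lo] = fromLeft;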

'Min to the left of i':
- Step one: find the 'min' of each range
- Step two: find 'fromleft'

[Slide figure: the same tree shape as before, with 'min' in place of 'sum' at each node.]

Filter

[Non-standard terminology] Given an array input, produce an array output containing only the elements for which f(elt) is true.

Example:
    input  [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
    f: is elt > 10?
    output [17, 11, 13, 19, 24]

This looks hard to parallelize:
- Determining whether an element belongs in the output is easy
- But getting elements into the right place in the output is hard; it seems to depend on previous results

Parallel prefix to the rescue

Filter: elt > 10

1. Use a parallel map to compute a bit-vector for true elements
       input  [17, 4, 6, 8, 11, 5, 13, 19, 0, 24]
       bits   [ 1, 0, 0, 0,  1, 0,  1,  1, 0,  1]
2. Do a parallel-prefix sum on the bit-vector
       bitsum [ 1, 1, 1, 1,  2, 2,  3,  4, 4,  5]
3. Allocate an output array with size bitsum[input.length-1]
4. Use a parallel map on input; if element i passes the test, put it in output at index bitsum[i]-1

       output = new array of size bitsum[n-1]
       if (bitsum[0] == 1) output[0] = input[0];
       FORALL (i = 1; i < input.length; i++)
         if (bitsum[i] > bitsum[i-1])
           output[bitsum[i]-1] = input[i];

Result: output [17, 11, 13, 19, 24]
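One concrete rendering of these four steps in Java; the plain loops stand in for the parallel maps, and prefixSum stands in for the parallel prefix sum (the helper names are made up for this sketch):

    static int[] filterGreaterThan10(int[] input) {
      int n = input.length;
      if (n == 0) return new int[0];
      int[] bits = new int[n];
      for (int i = 0; i < n; i++)             // step 1: a parallel map in the real version
        bits[i] = (input[i] > 10) ? 1 : 0;
      int[] bitsum = prefixSum(bits);         // step 2: (parallel) prefix sum
      int[] output = new int[bitsum[n - 1]];  // step 3: exact-size output array
      for (int i = 0; i < n; i++)             // step 4: a parallel map in the real version
        if (bits[i] == 1)
          output[bitsum[i] - 1] = input[i];
      return output;
    }

    static int[] prefixSum(int[] a) {         // sequential stand-in from the earlier slide
      int[] out = new int[a.length];
      out[0] = a[0];
      for (int i = 1; i < a.length; i++)
        out[i] = out[i - 1] + a[i];
      return out;
    }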

Filter comments

- The first two steps can be combined into one pass
  - Just use a different base case for the prefix sum
  - This has no effect on the asymptotic complexity
- Analysis: O(n) work, O(log n) span
  - 3 or so passes, but 3 is a constant
- We'll use parallelized filters to parallelize quicksort

Quicksort review

Recall quicksort: sequential, in place, expected time O(n log n) (and not stable).

Best/expected-case work:
1. Pick a pivot element: O(1)
2. Partition all the data into: O(n)
   A. the elements less than the pivot
   B. the pivot
   C. the elements greater than the pivot
3. Recursively sort A and C: 2T(n/2)

Recurrence (assuming a good pivot): T(0) = T(1) = 1; T(n) = 2T(n/2) + n
Run-time: O(n log n)

How should we parallelize this?

Quicksort

First: do the two recursive calls (step 3) in parallel. The steps are otherwise as before: pick a pivot, partition, recursively sort A and C.
- Work: unchanged, of course: O(n log n)
- The span recurrence now takes the form T(n) = T(n/2) + O(n), because the partition is still sequential; so the span is O(n)
- So the parallelism (i.e., work/span) is O(log n)
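A minimal fork-join sketch of just this step, assuming a conventional sequential in-place partition (the helper below is one simple choice, not the slides' code):

    import java.util.concurrent.RecursiveAction;

    class ParallelQuicksort extends RecursiveAction {
      final int[] arr; final int lo, hi;      // sorts arr[lo..hi-1]
      ParallelQuicksort(int[] arr, int lo, int hi) { this.arr = arr; this.lo = lo; this.hi = hi; }
      protected void compute() {
        if (hi - lo <= 1) return;
        int p = partition(arr, lo, hi);       // still sequential: the O(n) bottleneck
        ParallelQuicksort left  = new ParallelQuicksort(arr, lo, p);
        ParallelQuicksort right = new ParallelQuicksort(arr, p + 1, hi);
        left.fork();                          // the two recursive sorts run in parallel
        right.compute();
        left.join();
      }
      static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi - 1], i = lo;        // simple last-element pivot
        for (int j = lo; j < hi - 1; j++)
          if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi - 1]; a[hi - 1] = t;
        return i;                             // pivot's final index
      }
    }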

Doing better

- An O(log n) speed-up with an infinite number of processors is okay, but a bit underwhelming
  - Sorting 10^9 elements 30 times faster is decent…
- Google searches suggest quicksort cannot do better because the partition cannot be parallelized
  - The Internet has been known to be wrong
  - But we need auxiliary storage (no longer in place)
  - In practice, constant factors may make it not worth it, but remember Amdahl's Law
- We already have everything we need to parallelize the partition…

Parallel partition (not in place)

Partition all the data into:
A. the elements less than the pivot
B. the pivot
C. the elements greater than the pivot

- This is just two filters!
- We know a filter is O(n) work, O(log n) span
- Filter the elements less than the pivot into the left side of an auxiliary array
- Filter the elements greater than the pivot into the right side of the auxiliary array
- Put the pivot between them and recursively sort (a sketch of the two-filter partition follows)
- With a little more cleverness we can do both filters at once, but this has no effect on the asymptotic complexity
- With O(log n) span for the partition, the span recurrence becomes T(n) = T(n/2) + O(log n), so the total span for quicksort is O(log² n)
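A sketch of the partition as two filters into an auxiliary array; the sequential loops stand in for the O(log n)-span parallel filters, and distinct elements are assumed for simplicity:

    // Partitions in[] around pivot into aux[]; returns the pivot's final index.
    static int partitionViaFilters(int[] in, int[] aux, int pivot) {
      int k = 0;
      for (int x : in) if (x < pivot) aux[k++] = x;   // filter 1: less than pivot
      int pivotIndex = k;
      aux[k++] = pivot;                               // pivot lands between the filters
      for (int x : in) if (x > pivot) aux[k++] = x;   // filter 2: greater than pivot
      return pivotIndex;
    }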

Example

- Step 1: pick the pivot as the median of three
- Steps 2a and 2b (combinable): filter the elements less than the pivot, then filter the elements greater than the pivot, into a second array
- Step 3: two recursive sorts in parallel
  - Can sort back into the original array (as in mergesort)

Now mergesort

Recall mergesort: sequential, not in place, worst-case O(n log n).
1. Sort left half and right half: 2T(n/2)
2. Merge results: O(n)

Just like quicksort, doing the two recursive sorts in parallel changes the recurrence for the span to T(n) = T(n/2) + O(n), which is O(n). Again, the parallelism is O(log n). To do better we need to parallelize the merge; the trick won't use parallel prefix this time.

Parallelizing the merge

We need to merge two sorted subarrays (which may not have the same size). Idea: recursively divide the subarrays in half and merge the halves in parallel. Suppose the larger subarray has n elements. In parallel:
- Pick the median element of the larger array (here 6) in constant time
- In the other array, use binary search to find the first element greater than or equal to that median (here 7)
- Merge, in parallel, half the larger array (from the median onward) with the upper part of the shorter array
- Merge, in parallel, the lower part of the larger array with the lower part of the shorter array

(The "here 6" / "here 7" values refer to an example figure on the slide, not reproduced here.)

Parallelizing the merge, step by step (a code sketch follows):
1. Get the median of the bigger half: O(1) to compute the middle index
2. Find how to split the smaller half at the same value as the bigger-half split: O(log n) to do a binary search on the sorted smaller half
3. The sizes of the two sub-merges conceptually split the output array: O(1)
4. Do the two sub-merges in parallel
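A fork-join sketch of this merge, assuming both sorted inputs live in src and the result goes into dst starting at index out (illustrative names; lowerBound is a hand-rolled binary search):

    import java.util.concurrent.RecursiveAction;

    class ParallelMerge extends RecursiveAction {
      final int[] src, dst;
      final int lo1, hi1, lo2, hi2, out;   // merges src[lo1..hi1) and src[lo2..hi2) into dst[out..)
      ParallelMerge(int[] src, int lo1, int hi1, int lo2, int hi2, int[] dst, int out) {
        this.src = src; this.lo1 = lo1; this.hi1 = hi1;
        this.lo2 = lo2; this.hi2 = hi2; this.dst = dst; this.out = out;
      }
      protected void compute() {
        if (hi1 - lo1 < hi2 - lo2) {       // make the first subarray the bigger one
          new ParallelMerge(src, lo2, hi2, lo1, hi1, dst, out).compute();
          return;
        }
        if (hi1 == lo1) return;            // both subarrays empty
        int mid1 = (lo1 + hi1) / 2;                        // 1. median of the bigger half
        int mid2 = lowerBound(src, lo2, hi2, src[mid1]);   // 2. matching split in the smaller half
        int outMid = out + (mid1 - lo1) + (mid2 - lo2);    // 3. where the median lands in the output
        dst[outMid] = src[mid1];
        ParallelMerge low  = new ParallelMerge(src, lo1, mid1, lo2, mid2, dst, out);
        ParallelMerge high = new ParallelMerge(src, mid1 + 1, hi1, mid2, hi2, dst, outMid + 1);
        low.fork();                        // 4. the two sub-merges run in parallel
        high.compute();
        low.join();
      }
      static int lowerBound(int[] a, int lo, int hi, int key) {
        while (lo < hi) {                  // first index in [lo,hi) with a[index] >= key
          int m = (lo + hi) / 2;
          if (a[m] < key) lo = m + 1; else hi = m;
        }
        return lo;
      }
    }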

The recursion

When we do each merge in parallel, we again split the bigger subarray in half and use binary search to split the smaller one at the same value.

Analysis

- Sequential recurrence for mergesort: T(n) = 2T(n/2) + O(n), which is O(n log n)
- Doing the two recursive calls in parallel but keeping a sequential merge:
  - work: same as sequential
  - span: T(n) = T(n/2) + O(n), which is O(n)

For the parallel merge of n elements, it turns out to be (work not shown; these bounds are just for the merge):
- span: O(log² n)
- work: O(n)

So for mergesort with the parallel merge overall:
- span is T(n) = T(n/2) + O(log² n), which is O(log³ n)
- work is T(n) = 2T(n/2) + O(n), which is O(n log n)

So the parallelism (work/span) is O(n / log² n).
- Not quite as good as quicksort's, but it is a worst-case guarantee (unlike quicksort's)
- And as always, this is just the asymptotic result
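As a quick sanity check of the O(log³ n) span claim, the recurrence can be unrolled by hand (a sketch, with all constant factors folded into c):

    T(n) = T(n/2) + c \log^2 n
         = c \sum_{i=0}^{\lg n} \log^2(n / 2^i)
         = c \sum_{j=0}^{\lg n} j^2          (substituting j = \lg n - i)
         = \Theta(\log^3 n)

Each halving of n shrinks the log² term only slightly, and summing the first lg n squares gives the cubic-in-log bound.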