Parallel Algorithms (chap. 30, 1st edition)


Parallel Algorithms (chap. 30, 1st edition)
Parallel: perform more than one operation at a time.
PRAM model: Parallel Random Access Machine.
[Figure: processors p0, p1, …, pn−1 connected to a shared memory.]
Multiple processors are connected to a shared memory.
Each processor can access any memory location in unit time.
All processors can access memory in parallel.
All processors can perform operations in parallel.

Concurrent vs. Exclusive Access
Four models:
EREW: exclusive read and exclusive write.
CREW: concurrent read and exclusive write.
ERCW: exclusive read and concurrent write.
CRCW: concurrent read and concurrent write.
Handling write conflicts:
Common-write model: a concurrent write succeeds only if all processors write the same value.
Arbitrary-write model: an arbitrary one of the writers succeeds.
Priority-write model: the writer with the smallest index succeeds.
EREW and CRCW are the most popular models.

Synchronization and Control
A most important and complicated issue.
Suppose all processors are inherently tightly synchronized:
All processors execute the same statement at the same time.
There are no races among processors, i.e., they proceed at the same pace.
Termination control of a parallel loop:
It depends on the state of all processors, and can be tested in O(1) time.

Pointer Jumping – List Ranking
Given a singly linked list L with n objects, compute, for each object in L, its distance from the end of the list.
Formally, if next is the pointer field:
d[i] = 0 if next[i] = nil
d[i] = d[next[i]] + 1 if next[i] ≠ nil
Serial algorithm: Θ(n).

List ranking – EREW algorithm
LIST-RANK(L)   (runs in O(lg n) time)
  for each processor i, in parallel
    do if next[i] = nil
         then d[i] ← 0
         else d[i] ← 1
  while there exists an object i such that next[i] ≠ nil
    do for each processor i, in parallel
         do if next[i] ≠ nil
              then d[i] ← d[i] + d[next[i]]
                   next[i] ← next[next[i]]
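The pseudocode above can be simulated sequentially. The sketch below (hypothetical helper name list_rank, with None standing in for nil) mimics the synchronous PRAM semantics by taking a snapshot of d and next at the start of each round, so that all "parallel" reads happen before any writes:

```python
def list_rank(next_ptr):
    """EREW list ranking by pointer jumping, simulated sequentially.

    next_ptr[i] is the index of object i's successor, or None at the
    tail (nil).  Returns d, where d[i] is i's distance to the end.
    """
    n = len(next_ptr)
    d = [0 if next_ptr[i] is None else 1 for i in range(n)]
    nxt = list(next_ptr)
    while any(p is not None for p in nxt):
        # Snapshot = the values every processor reads this round,
        # before any processor writes (one synchronous PRAM step).
        old_d, old_nxt = list(d), list(nxt)
        for i in range(n):  # "for each processor i, in parallel"
            if old_nxt[i] is not None:
                d[i] = old_d[i] + old_d[old_nxt[i]]
                nxt[i] = old_nxt[old_nxt[i]]
    return d
```

For the chain 0 → 1 → 2 → 3 → 4 → 5, `list_rank([1, 2, 3, 4, 5, None])` returns `[5, 4, 3, 2, 1, 0]` after ⌈lg 6⌉ = 3 rounds of the while loop.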

List ranking – EREW algorithm
[Figure: snapshots (a)–(d) of pointer jumping on a six-object list; after each iteration the d values grow from the initial 1's (0 at the tail) to the final ranks 5, 4, 3, 2, 1, 0.]

List ranking – correctness of the EREW algorithm
Loop invariant: for each i, the sum of the d values in the sublist headed by i is the correct distance from i to the end of the original list L.
Parallel memory must be synchronized: the reads on the right-hand side must occur before the writes on the left-hand side. Moreover, d[i] is read first, and then d[next[i]].
It is an EREW algorithm: every read and write is exclusive. For an object i, its own processor reads d[i], and then the processor of its predecessor reads d[i]. The writes all go to distinct locations.

List ranking – EREW algorithm running time
O(lg n):
The initialization for loop runs in O(1) time.
Each iteration of the while loop runs in O(1) time.
There are exactly ⌈lg n⌉ iterations: each iteration transforms each list into two interleaved lists, one consisting of the objects in even positions and the other of those in odd positions. Thus each iteration doubles the number of lists but halves their lengths.
The termination test in line 5 runs in O(1) time.
Define work = #processors × running time. Here the work is O(n lg n).

Parallel prefix on a list
A prefix computation is defined as:
Input: <x1, x2, …, xn> and a binary associative operation ⊗.
Output: <y1, y2, …, yn> such that:
y1 = x1
yk = yk−1 ⊗ xk for k = 2, 3, …, n, i.e., yk = x1 ⊗ x2 ⊗ … ⊗ xk.
Suppose <x1, x2, …, xn> are stored in order in a list.
Define the notation: [i, j] = xi ⊗ xi+1 ⊗ … ⊗ xj.

Prefix computation
LIST-PREFIX(L)
  for each processor i, in parallel
    do y[i] ← x[i]
  while there exists an object i such that next[i] ≠ nil
    do for each processor i, in parallel
         do if next[i] ≠ nil
              then y[next[i]] ← y[i] ⊗ y[next[i]]
                   next[i] ← next[next[i]]
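LIST-PREFIX can be simulated sequentially in the same round-by-round style (hypothetical name list_prefix; the operation ⊗ is passed in as op, defaulting to addition):

```python
import operator

def list_prefix(x, next_ptr, op=operator.add):
    """Prefix computation on a list by pointer jumping.

    x[i] is the value at object i, next_ptr[i] its successor (None =
    nil).  Returns y with y[i] = x[head] op ... op x[i] in list order.
    """
    n = len(x)
    y = list(x)
    nxt = list(next_ptr)
    while any(p is not None for p in nxt):
        old_y, old_nxt = list(y), list(nxt)  # synchronous snapshot
        for i in range(n):  # "for each processor i, in parallel"
            if old_nxt[i] is not None:
                # Each object has at most one predecessor, so this
                # write to y[old_nxt[i]] is exclusive.
                y[old_nxt[i]] = op(old_y[i], old_y[old_nxt[i]])
                nxt[i] = old_nxt[old_nxt[i]]
    return y
```

For example, `list_prefix([3, 1, 4, 1, 5], [1, 2, 3, 4, None])` yields the prefix sums `[3, 4, 8, 9, 14]`.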

Prefix computation – EREW algorithm
[Figure: stages (a)–(d) of the prefix computation on a six-object list; initially object k holds [k, k], and after the final step object k holds [1, k].]

Find roots – CREW algorithm
Suppose we have a forest of binary trees in which each node i has a pointer parent[i].
Find the identity of the root of the tree containing each node.
Assume that each node is associated with a processor, and that each node i has a field root[i].

Find roots – CREW algorithm
FIND-ROOTS(F)
  for each processor i, in parallel
    do if parent[i] = nil
         then root[i] ← i
  while there exists a node i such that parent[i] ≠ nil
    do for each processor i, in parallel
         do if parent[i] ≠ nil
              then root[i] ← root[parent[i]]
                   parent[i] ← parent[parent[i]]
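A sequential sketch of FIND-ROOTS (hypothetical name find_roots, None for nil), again snapshotting per round to model the synchronous steps:

```python
def find_roots(parent):
    """Root finding in a forest by pointer jumping (CREW, simulated).

    parent[i] is node i's parent, or None (nil) at a root.  Returns
    root, where root[i] identifies the root of i's tree.
    """
    n = len(parent)
    root = [i if parent[i] is None else None for i in range(n)]
    par = list(parent)
    while any(p is not None for p in par):
        old_root, old_par = list(root), list(par)  # snapshot
        for i in range(n):  # "for each processor i, in parallel"
            if old_par[i] is not None:
                # Several nodes may share a parent, so these reads at
                # index old_par[i] are the concurrent reads that make
                # the algorithm CREW rather than EREW.
                root[i] = old_root[old_par[i]]
                par[i] = old_par[old_par[i]]
    return root
```

For a forest with roots 0 and 2, `find_roots([None, 0, None, 2, 2])` returns `[0, 0, 2, 2, 2]`.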

Find roots – CREW algorithm
Running time: O(lg d), where d is the height of the maximum-depth tree in the forest.
All the writes are exclusive, but the read in line 7 is concurrent, since several nodes may have the same node as parent. See Figure 30.5.

Find roots – CREW vs. EREW
How fast can n nodes in a forest determine their roots using only exclusive reads? Ω(lg n).
Argument: with exclusive reads, a given piece of information can be copied to only one other memory location in each step, so the number of locations containing a given piece of information at most doubles with each step. In a forest with one tree of n nodes, the root identity is stored in one place initially; after the first step it is stored in at most two places, after the second step in at most four places, and so on, so lg n steps are needed for it to be stored in n places.
So CREW takes O(lg d) while EREW takes Ω(lg n). If lg d = o(lg n), e.g., d = 2^√(lg n), the CREW algorithm outperforms any EREW algorithm. If d = Θ(lg n), then the CREW algorithm runs in O(lg lg n) time, and any EREW algorithm is much slower.

Find maximum – CRCW algorithm
Given n elements in A[0 .. n−1], find the maximum.
Suppose we have n² processors; processor (i, j) compares A[i] and A[j], for 0 ≤ i, j ≤ n−1.

FAST-MAX(A)
  n ← length[A]
  for i ← 0 to n−1, in parallel
    do m[i] ← true
  for i ← 0 to n−1 and j ← 0 to n−1, in parallel
    do if A[i] < A[j]
         then m[i] ← false
  for i ← 0 to n−1, in parallel
    do if m[i] = true
         then max ← A[i]
  return max

Example with A = (5, 6, 9, 2, 9); each entry shows whether A[i] < A[j]:

            A[j]
  A[i]   5  6  9  2  9    m
    5    F  T  T  F  T    F
    6    F  F  T  F  T    F
    9    F  F  F  F  F    T
    2    T  T  T  F  T    F
    9    F  F  F  F  F    T
  max = 9

The running time is O(1).
Note: there may be multiple maximum values, so their processors will write to max concurrently; since they all write the same value, the common-write model suffices.
Its work = n² × O(1) = O(n²).
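FAST-MAX can be simulated sequentially as below (hypothetical name fast_max); the nested loop stands in for the n² parallel comparison processors:

```python
def fast_max(a):
    """O(1)-time CRCW maximum with n^2 processors, simulated
    sequentially.  m[i] remains true only if no element beats a[i].
    """
    n = len(a)
    m = [True] * n  # n processors, in parallel
    # n^2 processors (i, j), in parallel.  Every processor that finds
    # a[i] < a[j] writes false to m[i] concurrently; they all write
    # the same value, so the common-write model suffices.
    for i in range(n):
        for j in range(n):
            if a[i] < a[j]:
                m[i] = False
    maximum = None
    for i in range(n):  # n processors, in parallel
        if m[i]:
            # With duplicate maxima, several processors write here
            # concurrently -- again, all with the same value.
            maximum = a[i]
    return maximum
```

`fast_max([5, 6, 9, 2, 9])` returns `9`.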

Find maximum – CRCW vs. EREW
Finding the maximum with EREW requires Ω(lg n) time.
Argument: consider how many elements "think" they might be the maximum: initially n; after the first step, n/2; after the second step, n/4; and so on, halving at each step.
Moreover, CREW also takes Ω(lg n).

Simulating CRCW with EREW
Theorem: a p-processor CRCW algorithm can be no more than O(lg p) times faster than the best p-processor EREW algorithm for the same problem.
Proof idea: each step of the CRCW algorithm can be simulated by O(lg p) EREW computations.
Consider a concurrent write: CRCW processor pi writes datum xi to location li (the li may be the same for several of the pi's).
The corresponding EREW processor pi writes the pair (li, xi) to location A[i]; the A[i]'s are distinct, so this write is exclusive.
Sort all the pairs (li, xi) by li, which brings pairs with the same location together, in O(lg p) time.
Each EREW processor pi then compares A[i] = (lj, xj) and A[i−1] = (lk, xk). If lj ≠ lk or i = 0, processor pi writes xj to location lj; this write is exclusive, since only the first pair for each location triggers it. See Figure 30.7.
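One simulated step of the proof can be sketched sequentially (hypothetical name simulate_crcw_step; sorting by (location, processor index) makes the smallest-index writer win, i.e., the priority-write policy):

```python
def simulate_crcw_step(writes, memory):
    """Simulate one concurrent-write CRCW step with exclusive writes.

    writes[pid] = (location, value) for processor pid.  Sorting the
    pairs by location -- O(lg p) time with a parallel sort on an EREW
    PRAM -- brings all writes to the same location together; then only
    the processor holding the first pair for each location writes, so
    every write is exclusive.  Ordering ties by pid realizes the
    priority-write policy (smallest index succeeds).
    """
    triples = sorted((loc, pid, val)
                     for pid, (loc, val) in enumerate(writes))
    for i, (loc, pid, val) in enumerate(triples):  # "in parallel"
        if i == 0 or triples[i - 1][0] != loc:
            memory[loc] = val  # exclusive write
    return memory
```

Usage: `simulate_crcw_step([(0, 'a'), (1, 'b'), (0, 'c')], {})` gives `{0: 'a', 1: 'b'}`; processors 0 and 2 both target location 0, and the lower-indexed one wins.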

CRCW vs. EREW
Some say: CRCW is easier to program and faster.
Others say: the hardware needed for CRCW is slower than that for EREW, and in practice one cannot find the maximum in O(1) time.
Still others say: both EREW and CRCW are the wrong models. Processors must be connected by a network and can communicate with one another only via the network, so the network should be part of the model.