COMP7330/7336 Advanced Parallel and Distributed Computing
Recursive Decomposition
Dr. Xiao Qin, Auburn University
http://www.eng.auburn.edu/~xqin | xqin@auburn.edu
Slides are adapted from Drs. Ananth Grama, Anshul Gupta, George Karypis, and Vipin Kumar.

Review: Processes and Mapping
In general, the number of tasks in a decomposition exceeds the number of processing elements available. A parallel algorithm must therefore provide a mapping of tasks to processes.

Review: Processes and Mapping
We refer to the mapping as being from tasks to processes, as opposed to processors, because typical programming APIs do not allow easy binding of tasks to physical processors. Instead, we aggregate tasks into processes and rely on the system to map these processes to physical processors. Here, a process (not in the UNIX sense) is a collection of tasks and associated data.

Processes and Mapping
Appropriate mapping of tasks to processes is critical to the parallel performance of an algorithm. Mappings are determined by both the task dependency and task interaction graphs:
• Task dependency graphs can be used to ensure that work is equally spread across all processes at any point (minimum idling and optimal load balance).
• Task interaction graphs can be used to make sure that processes need minimum interaction with other processes (minimum communication).

Processes and Mapping
An appropriate mapping must minimize parallel execution time by (how?):
• mapping independent tasks to different processes;
• assigning tasks on the critical path to processes as soon as they become available;
• minimizing interaction between processes by mapping tasks with dense interactions to the same process.
Note: these criteria often conflict with each other. For example, a decomposition into a single task (i.e., no decomposition at all) minimizes interaction but yields no speedup at all! Can you think of other such conflicting cases?

Processes and Mapping: Example
How can the tasks in the database query decomposition be mapped to processes? View the dependency graph in terms of levels, such that no two nodes within a level have dependencies on each other; tasks within a single level can then be assigned to different processes, as in the sketch below.
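As an illustrative sketch only (the level structure and task IDs below are hypothetical, not taken from the query example), level-by-level round-robin assignment can be expressed in C as:

#include <stdio.h>

#define NUM_PROCS  4
#define NUM_LEVELS 3
#define MAX_WIDTH  4

/* Hypothetical dependency-graph levels: levels[l] lists the task IDs
 * at level l (no two tasks in a level depend on each other);
 * -1 marks an unused slot. */
static const int levels[NUM_LEVELS][MAX_WIDTH] = {
    {0, 1, 2, 3},    /* level 0: four independent leaf tasks */
    {4, 5, -1, -1},  /* level 1: two tasks                   */
    {6, -1, -1, -1}, /* level 2: the root task               */
};

int main(void) {
    for (int l = 0; l < NUM_LEVELS; l++)
        for (int i = 0; i < MAX_WIDTH; i++) {
            if (levels[l][i] < 0) continue;
            /* Tasks within a level are dealt round-robin to different
             * processes, spreading each level's work evenly. */
            printf("task %d (level %d) -> process %d\n",
                   levels[l][i], l, i % NUM_PROCS);
        }
    return 0;
}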

Decomposition Techniques
So how does one decompose a task into various subtasks? While there is no single recipe that works for all problems, we present a set of commonly used techniques that apply to broad classes of problems. These include:
• recursive decomposition
• data decomposition
• exploratory decomposition
• speculative decomposition

Recursive Decomposition
Generally suited to problems that are solved using the divide-and-conquer strategy. A given problem is first decomposed into a set of sub-problems; these sub-problems are recursively decomposed further until a desired granularity is reached. Can you give an example?

Recursive Decomposition: Example
A classic example of a divide-and-conquer algorithm on which we can apply recursive decomposition is Quicksort. Once the list has been partitioned around the pivot, each sublist can be processed concurrently (i.e., each sublist represents an independent subtask). This can be repeated recursively, as in the sketch below.
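As a minimal sketch (our own illustration, not from the slides), the decomposition maps naturally onto OpenMP tasks: each recursive call on a sublist is spawned as an independent task. The CUTOFF threshold is an assumed tuning knob that stops task creation once sublists get small.

#include <stdio.h>
#include <omp.h>

#define CUTOFF 1000  /* assumed threshold below which we recurse serially */

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Partition A[lo..hi] around A[hi] as the pivot (Lomuto scheme). */
static int partition(int A[], int lo, int hi) {
    int pivot = A[hi], i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (A[j] < pivot) swap(&A[++i], &A[j]);
    swap(&A[i + 1], &A[hi]);
    return i + 1;
}

static void quicksort(int A[], int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(A, lo, hi);
    /* Each sublist is an independent subtask; spawn deferred tasks
       only while the sublists are large enough to be worth it. */
    #pragma omp task shared(A) if (p - 1 - lo > CUTOFF)
    quicksort(A, lo, p - 1);
    #pragma omp task shared(A) if (hi - p - 1 > CUTOFF)
    quicksort(A, p + 1, hi);
    #pragma omp taskwait
}

int main(void) {
    int A[] = {4, 9, 1, 7, 8, 11, 2, 12};
    int n = sizeof A / sizeof A[0];
    #pragma omp parallel
    #pragma omp single   /* one thread seeds the task tree */
    quicksort(A, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", A[i]);
    printf("\n");
    return 0;
}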

Recursive Decomposition: Example
The problem of finding the minimum number in a given list (or indeed any other associative operation, such as sum, AND, etc.) can be fashioned as a divide-and-conquer algorithm. We first start with a simple serial loop for computing the minimum entry in a given list:

procedure SERIAL_MIN(A, n)
begin
    min := A[0];
    for i := 1 to n − 1 do
        if (A[i] < min) min := A[i];
    endfor;
    return min;
end SERIAL_MIN

Recursive Decomposition: Example
How can you rewrite the loop in divide-and-conquer form?

procedure RECURSIVE_MIN(A, n)
begin
    if (n = 1) then
        min := A[0];
    else
        lmin := RECURSIVE_MIN(A, n/2);
        rmin := RECURSIVE_MIN(&(A[n/2]), n − n/2);
        if (lmin < rmin) then
            min := lmin;
        else
            min := rmin;
        endelse;
    endelse;
    return min;
end RECURSIVE_MIN
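As a hedged, runnable translation (ours, not the textbook's), the same recursion can be written in C with OpenMP tasks so that the two half-list subproblems execute concurrently; the GRAIN cutoff is an assumption to keep task overhead in check:

#include <stdio.h>
#include <omp.h>

#define GRAIN 1024  /* assumed cutoff below which we use the serial loop */

static int recursive_min(const int *A, int n) {
    if (n <= GRAIN) {            /* serial base case: SERIAL_MIN */
        int min = A[0];
        for (int i = 1; i < n; i++)
            if (A[i] < min) min = A[i];
        return min;
    }
    int lmin, rmin;
    /* The two halves are independent subtasks. */
    #pragma omp task shared(lmin)
    lmin = recursive_min(A, n / 2);
    #pragma omp task shared(rmin)
    rmin = recursive_min(A + n / 2, n - n / 2);
    #pragma omp taskwait
    return lmin < rmin ? lmin : rmin;
}

int main(void) {
    int A[] = {4, 9, 1, 7, 8, 11, 2, 12};
    int min;
    #pragma omp parallel
    #pragma omp single   /* one thread seeds the recursion */
    min = recursive_min(A, sizeof A / sizeof A[0]);
    printf("min = %d\n", min);  /* prints: min = 1 */
    return 0;
}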

Recursive Decomposition: Example
The code in the previous foil can be decomposed naturally using a recursive decomposition strategy. We illustrate this with the example of finding the minimum number in the set {4, 9, 1, 7, 8, 11, 2, 12}. What is the corresponding task dependency graph associated with this computation?

Data Decomposition
Identify the data on which computations are performed, and partition this data across various tasks. The partitioning induces a decomposition of the problem. Data can be partitioned in various ways, and the choice critically impacts the performance of a parallel algorithm.

Data Decomposition: Output Data Decomposition
Often, each element of the output can be computed independently of the others, simply as a function of the input. A partition of the output across tasks then decomposes the problem naturally.

Output Data Decomposition: Example
Consider the problem of multiplying two n × n matrices A and B to yield the matrix C. Partitioning the output matrix C into four blocks induces four tasks, one per block:
Task 1: C1,1 = A1,1 B1,1 + A1,2 B2,1
Task 2: C1,2 = A1,1 B1,2 + A1,2 B2,2
Task 3: C2,1 = A2,1 B1,1 + A2,2 B2,1
Task 4: C2,2 = A2,1 B1,2 + A2,2 B2,2
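A minimal runnable sketch (our own, with an assumed tiny matrix size) of this four-task decomposition using OpenMP tasks, where each task owns exactly one output block:

#include <stdio.h>
#include <omp.h>

#define N  4        /* assumed (even) matrix dimension */
#define BS (N / 2)  /* block size: C is split into 2x2 blocks */

/* Compute one full BS x BS output block of C = A * B, starting at
 * row ci, column cj. Each call writes a disjoint block, so the four
 * calls below are independent tasks. */
static void block_mm(const double A[N][N], const double B[N][N],
                     double C[N][N], int ci, int cj) {
    for (int i = ci; i < ci + BS; i++)
        for (int j = cj; j < cj + BS; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++)
                s += A[i][k] * B[k][j];
            C[i][j] = s;
        }
}

int main(void) {
    static double A[N][N], B[N][N], C[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) { A[i][j] = i + j; B[i][j] = (i == j); }

    #pragma omp parallel
    #pragma omp single
    {
        #pragma omp task shared(A, B, C)
        block_mm(A, B, C, 0, 0);    /* Task 1: C1,1 */
        #pragma omp task shared(A, B, C)
        block_mm(A, B, C, 0, BS);   /* Task 2: C1,2 */
        #pragma omp task shared(A, B, C)
        block_mm(A, B, C, BS, 0);   /* Task 3: C2,1 */
        #pragma omp task shared(A, B, C)
        block_mm(A, B, C, BS, BS);  /* Task 4: C2,2 */
        #pragma omp taskwait
    }
    printf("C[2][3] = %g\n", C[2][3]);  /* B is the identity, so C == A */
    return 0;
}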

Output Data Decomposition: Example
A partitioning of the output data does not result in a unique decomposition into tasks. For the above problem, two different decompositions share the identical output data distribution:

Decomposition I
Task 1: C1,1 = A1,1 B1,1
Task 2: C1,1 = C1,1 + A1,2 B2,1
Task 3: C1,2 = A1,1 B1,2
Task 4: C1,2 = C1,2 + A1,2 B2,2
Task 5: C2,1 = A2,1 B1,1
Task 6: C2,1 = C2,1 + A2,2 B2,1
Task 7: C2,2 = A2,1 B1,2
Task 8: C2,2 = C2,2 + A2,2 B2,2

Decomposition II
Task 1: C1,1 = A1,1 B1,1
Task 2: C1,1 = C1,1 + A1,2 B2,1
Task 3: C1,2 = A1,2 B2,2
Task 4: C1,2 = C1,2 + A1,1 B1,2
Task 5: C2,1 = A2,2 B2,1
Task 6: C2,1 = C2,1 + A2,1 B1,1
Task 7: C2,2 = A2,1 B1,2
Task 8: C2,2 = C2,2 + A2,2 B2,2

Can you derive other decompositions?

Exercise: Output Data Decomposition
Consider the problem of counting the instances of given itemsets in a database of transactions. In this case, the output (the itemset frequencies) can be partitioned across tasks. How?

Output Data Decomposition: Observations
If the database of transactions is replicated across the processes, each task can be accomplished independently, with no communication. If the database is itself partitioned across processes (for reasons of memory utilization), each task first computes partial counts, which are then aggregated at the appropriate task. The replicated case might look like the sketch below.
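As an illustrative sketch only (the bitmask encoding of transactions and itemsets below is hypothetical), output decomposition with a replicated database: each thread owns a subset of the itemsets and scans the whole transaction set, so no thread ever touches another thread's counts:

#include <stdio.h>
#include <omp.h>

#define NUM_TXNS     8
#define NUM_ITEMSETS 4

/* Hypothetical encoding: each transaction / itemset is a bitmask of items. */
static const unsigned txns[NUM_TXNS] = {0x3, 0x5, 0x7, 0x1, 0x6, 0x3, 0xD, 0xF};
static const unsigned itemsets[NUM_ITEMSETS] = {0x1, 0x3, 0x5, 0x6};

int main(void) {
    int counts[NUM_ITEMSETS] = {0};
    /* Output decomposition: the loop over itemsets (the output) is
       split across threads; the replicated database is read by all. */
    #pragma omp parallel for
    for (int s = 0; s < NUM_ITEMSETS; s++)
        for (int t = 0; t < NUM_TXNS; t++)
            if ((txns[t] & itemsets[s]) == itemsets[s])
                counts[s]++;  /* itemset s occurs in transaction t */
    for (int s = 0; s < NUM_ITEMSETS; s++)
        printf("itemset 0x%X: %d\n", itemsets[s], counts[s]);
    return 0;
}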

Input Data Partitioning
Generally applicable if each output can be naturally computed as a function of the input. In many cases, this is the only natural decomposition because the output is not clearly known a priori (e.g., the problem of finding the minimum in a list, sorting a given list, etc.). A task is associated with each input data partition; the task performs as much of the computation as it can with its part of the data, and subsequent processing combines these partial results, as in the database counting example below.

Input Data Partitioning: Example
The input (i.e., the transaction set) can be partitioned: each task generates partial counts for all itemsets on its portion, and these partial counts are subsequently combined into aggregate counts, as in the sketch below.
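A minimal sketch (same hypothetical encoding as above) of input partitioning: transactions are split across threads, each thread accumulates partial counts for all itemsets, and an OpenMP array reduction (OpenMP 4.5 or later) performs the final aggregation:

#include <stdio.h>
#include <omp.h>

#define NUM_TXNS     8
#define NUM_ITEMSETS 4

static const unsigned txns[NUM_TXNS] = {0x3, 0x5, 0x7, 0x1, 0x6, 0x3, 0xD, 0xF};
static const unsigned itemsets[NUM_ITEMSETS] = {0x1, 0x3, 0x5, 0x6};

int main(void) {
    int counts[NUM_ITEMSETS] = {0};
    /* Input decomposition: the loop over transactions (the input) is
       split across threads; each thread keeps partial counts for ALL
       itemsets, and the reduction aggregates them at the end. */
    #pragma omp parallel for reduction(+ : counts[:NUM_ITEMSETS])
    for (int t = 0; t < NUM_TXNS; t++)
        for (int s = 0; s < NUM_ITEMSETS; s++)
            if ((txns[t] & itemsets[s]) == itemsets[s])
                counts[s]++;
    for (int s = 0; s < NUM_ITEMSETS; s++)
        printf("itemset 0x%X: %d\n", itemsets[s], counts[s]);
    return 0;
}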

Partitioning Input and Output Data
Often, input and output data decomposition can be combined for a higher degree of concurrency. In the itemset-counting example, the transaction set (input) and the itemset counts (output) can both be partitioned: each task is then responsible for one block of transactions and one block of itemsets, computing partial counts of its itemsets over its transactions, and partial counts for the same itemset block are summed afterwards. A combined sketch follows.
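A rough sketch (same hypothetical encoding; the 2 × 2 blocking is an assumption) of the combined decomposition: the collapsed loop nest creates one logical task per (transaction block, itemset block) pair, and the reduction merges partial counts across transaction blocks:

#include <stdio.h>
#include <omp.h>

#define NUM_TXNS     8
#define NUM_ITEMSETS 4

static const unsigned txns[NUM_TXNS] = {0x3, 0x5, 0x7, 0x1, 0x6, 0x3, 0xD, 0xF};
static const unsigned itemsets[NUM_ITEMSETS] = {0x1, 0x3, 0x5, 0x6};

int main(void) {
    int counts[NUM_ITEMSETS] = {0};
    /* Combined decomposition: a 2x2 grid of tasks, one per
       (transaction block, itemset block) pair. Two tasks contribute
       to each itemset block, so counts are reduced across them. */
    #pragma omp parallel for collapse(2) reduction(+ : counts[:NUM_ITEMSETS])
    for (int tb = 0; tb < 2; tb++)       /* transaction blocks */
        for (int sb = 0; sb < 2; sb++)   /* itemset blocks */
            for (int t = tb * NUM_TXNS / 2; t < (tb + 1) * NUM_TXNS / 2; t++)
                for (int s = sb * NUM_ITEMSETS / 2;
                     s < (sb + 1) * NUM_ITEMSETS / 2; s++)
                    if ((txns[t] & itemsets[s]) == itemsets[s])
                        counts[s]++;
    for (int s = 0; s < NUM_ITEMSETS; s++)
        printf("itemset 0x%X: %d\n", itemsets[s], counts[s]);
    return 0;
}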