Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display. Parallel Programming in C with MPI and OpenMP Michael J. Quinn.


Chapter 5: The Sieve of Eratosthenes

Chapter Objectives
- Analysis of block allocation schemes
- Function MPI_Bcast
- Performance enhancements

Outline
- Sequential algorithm
- Sources of parallelism
- Data decomposition options
- Parallel algorithm development and analysis
- MPI program
- Benchmarking
- Optimizations

Sequential Algorithm
Complexity: Θ(n ln ln n)

Pseudocode
1. Create list of unmarked natural numbers 2, 3, …, n
2. k ← 2
3. Repeat
   (a) Mark all multiples of k between k² and n
   (b) k ← smallest unmarked number > k
   until k² > n
4. The unmarked numbers are primes

Sources of Parallelism
- Domain decomposition
  - Divide data into pieces
  - Associate computational steps with data
- One primitive task per array element

Making 3(a) Parallel
Mark all multiples of k between k² and n:
for all j where k² ≤ j ≤ n do
   if j mod k = 0 then
      mark j (it is not a prime)
   endif
endfor

Making 3(b) Parallel
Find smallest unmarked number > k:
- Min-reduction (to find smallest unmarked number > k)
- Broadcast (to get result to all tasks)

Agglomeration Goals
- Consolidate tasks
- Reduce communication cost
- Balance computations among processes

Data Decomposition Options
- Interleaved (cyclic)
  - Easy to determine "owner" of each index
  - Leads to load imbalance for this problem
- Block
  - Balances loads
  - More complicated to determine owner if n is not a multiple of p

Block Decomposition Options
- Want to balance the workload when n is not a multiple of p
- Each process gets either ⌊n/p⌋ or ⌈n/p⌉ elements
- Seek simple expressions
  - Find low, high indices given an owner
  - Find owner given an index

Method #1
- Let r = n mod p
- If r = 0, all blocks have the same size
- Else
  - First r blocks have size ⌈n/p⌉
  - Remaining p − r blocks have size ⌊n/p⌋

Examples
[Figures: 17 elements divided among 7, 5, and 3 processes]

Method #1 Calculations
[Formulas shown as figures: the first element controlled by process i, the last element controlled by process i, and the process controlling element j]

Method #2
- Scatters larger blocks among processes
- First element controlled by process i: ⌊in/p⌋
- Last element controlled by process i: ⌊(i+1)n/p⌋ − 1
- Process controlling element j: ⌊(p(j+1) − 1)/n⌋

Examples
[Figures: Method #2 dividing 17 elements among 7, 5, and 3 processes]

Comparing Methods
Operation counts, assuming no operations for the "floor" function:

   Operation    Method 1   Method 2
   Low index        4          2
   High index       6          4
   Owner            7          4

Our choice: Method 2

Pop Quiz
Illustrate how block decomposition method #2 would divide 13 elements among 5 processes.
⌊13(0)/5⌋ = 0
⌊13(1)/5⌋ = 2
⌊13(2)/5⌋ = 5
⌊13(3)/5⌋ = 7
⌊13(4)/5⌋ = 10

Block Decomposition Macros
#define BLOCK_LOW(id,p,n)      ((id)*(n)/(p))
#define BLOCK_HIGH(id,p,n)     (BLOCK_LOW((id)+1,p,n)-1)
#define BLOCK_SIZE(id,p,n)     (BLOCK_LOW((id)+1,p,n)-BLOCK_LOW(id,p,n))
#define BLOCK_OWNER(index,p,n) (((p)*((index)+1)-1)/(n))

Local vs. Global Indices
[Figure: each process numbers its block with local indices L starting at 0, while the same elements carry consecutive global indices G (0, 1, …, n−1) running across all the blocks]

Looping over Elements
Sequential program:
for (i = 0; i < n; i++) {
   …
}
Parallel program:
size = BLOCK_SIZE (id,p,n);
for (i = 0; i < size; i++) {
   gi = i + BLOCK_LOW(id,p,n);
}
Local index i runs over this process's block; gi takes the place of the sequential program's (global) index.

Decomposition Affects Implementation
- Largest prime used to sieve is √n
- First process has ⌊n/p⌋ elements
- It has all sieving primes if p < √n
- First process always broadcasts the next sieving prime
- No reduction step needed

Fast Marking
Block decomposition allows the same marking as the sequential algorithm:
j, j + k, j + 2k, j + 3k, …
instead of
for all j in block
   if j mod k = 0 then mark j (it is not a prime)

Parallel Algorithm Development
1. Create list of unmarked natural numbers 2, 3, …, n — each process creates its share of the list
2. k ← 2 — each process does this
3. Repeat
   (a) Mark all multiples of k between k² and n — each process marks its share of the list
   (b) k ← smallest unmarked number > k — process 0 only
   (c) Process 0 broadcasts k to the rest of the processes
   until k² > n
4. The unmarked numbers are primes
5. Reduction to determine the number of primes

Function MPI_Bcast
int MPI_Bcast (
   void         *buffer,   /* Addr of 1st element     */
   int           count,    /* # elements to broadcast */
   MPI_Datatype  datatype, /* Type of elements        */
   int           root,     /* ID of root process      */
   MPI_Comm      comm)     /* Communicator            */

MPI_Bcast (&k, 1, MPI_INT, 0, MPI_COMM_WORLD);

Task/Channel Graph
[Figure: task/channel graph for the parallel sieve]

Analysis
- χ is the time needed to mark a cell
- Sequential execution time: χ n ln ln n
- Number of broadcasts: √n / ln √n
- Broadcast time: λ ⌈log p⌉
- Expected execution time: χ(n ln ln n)/p + (√n / ln √n) λ ⌈log p⌉

Code (1/4)
#include <mpi.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include "MyMPI.h"
#define MIN(a,b) ((a)<(b)?(a):(b))

int main (int argc, char *argv[])
{
   ...
   MPI_Init (&argc, &argv);
   MPI_Barrier (MPI_COMM_WORLD);
   elapsed_time = -MPI_Wtime();
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);
   if (argc != 2) {
      if (!id) printf ("Command line: %s \n", argv[0]);
      MPI_Finalize();
      exit (1);
   }

Code (2/4)
   n = atoi(argv[1]);
   low_value  = 2 + BLOCK_LOW(id,p,n-1);
   high_value = 2 + BLOCK_HIGH(id,p,n-1);
   size = BLOCK_SIZE(id,p,n-1);
   proc0_size = (n-1)/p;
   if ((2 + proc0_size) < (int) sqrt((double) n)) {
      if (!id) printf ("Too many processes\n");
      MPI_Finalize();
      exit (1);
   }
   marked = (char *) malloc (size);
   if (marked == NULL) {
      printf ("Cannot allocate enough memory\n");
      MPI_Finalize();
      exit (1);
   }

Code (3/4)
   for (i = 0; i < size; i++) marked[i] = 0;
   if (!id) index = 0;
   prime = 2;
   do {
      if (prime * prime > low_value)
         first = prime * prime - low_value;
      else {
         if (!(low_value % prime)) first = 0;
         else first = prime - (low_value % prime);
      }
      for (i = first; i < size; i += prime) marked[i] = 1;
      if (!id) {
         while (marked[++index]);
         prime = index + 2;
      }
      MPI_Bcast (&prime, 1, MPI_INT, 0, MPI_COMM_WORLD);
   } while (prime * prime <= n);

Code (4/4)
   count = 0;
   for (i = 0; i < size; i++)
      if (!marked[i]) count++;
   MPI_Reduce (&count, &global_count, 1, MPI_INT, MPI_SUM,
               0, MPI_COMM_WORLD);
   elapsed_time += MPI_Wtime();
   if (!id) {
      printf ("%d primes are less than or equal to %d\n",
              global_count, n);
      printf ("Total elapsed time: %10.6f\n", elapsed_time);
   }
   MPI_Finalize ();
   return 0;
}

Benchmarking
- Execute the sequential algorithm to determine χ (nanosec)
- Execute a series of broadcasts to determine λ = 250 μsec

Execution Times (sec)
[Table: predicted vs. actual execution time for each processor count]

Improvements
- Delete even integers
  - Cuts number of computations in half
  - Frees storage for larger values of n
- Each process finds its own sieving primes
  - Replicates computation of the primes up to √n
  - Eliminates broadcast step
- Reorganize loops
  - Increases cache hit rate

Reorganize Loops
[Figure: two loop organizations, one with a lower cache hit rate and one with a higher cache hit rate]

Comparing 4 Versions
[Table: execution times of Sieve 1 through Sieve 4 for each processor count; callouts note the improvements of the optimized versions, including a 7-fold improvement]

Summary
- Sieve of Eratosthenes: parallel design uses domain decomposition
- Compared two block distributions; chose the one with simpler formulas
- Introduced MPI_Bcast
- Optimizations reveal the importance of maximizing single-processor performance