Parallel Programming in C with MPI and OpenMP
Michael J. Quinn
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.

Chapter 4 Message-Passing Programming

Learning Objectives
Understanding how MPI programs execute
Familiarity with fundamental MPI functions

Outline
Message-passing model
Message Passing Interface (MPI)
Coding MPI programs
Compiling MPI programs
Running MPI programs
Benchmarking MPI programs

Message-passing Model

Processes
Number is specified at start-up time
Remains constant throughout execution of program
All execute same program
Each has unique ID number
Alternately performs computations and communicates

Advantages of Message-passing Model
Gives programmer ability to manage the memory hierarchy
Portability to many architectures
Easier to create a deterministic program
Simplifies debugging

The Message Passing Interface
Late 1980s: vendors had unique libraries
1989: Parallel Virtual Machine (PVM) developed at Oak Ridge National Lab
1992: Work on MPI standard begun
1994: Version 1.0 of MPI standard
1997: Version 2.0 of MPI standard
Today: MPI is the dominant message-passing library standard

Circuit Satisfiability
[Figure: a 16-input circuit; for the input combination shown, the output is 0 (not satisfied)]

Solution Method
Circuit satisfiability is NP-complete
No known algorithm solves it in polynomial time
We seek all solutions
We find them through exhaustive search
16 inputs ⇒ 65,536 combinations to test

Partitioning: Functional Decomposition
Embarrassingly parallel: No channels between tasks

Agglomeration and Mapping
Properties of parallel algorithm:
Fixed number of tasks
No communications between tasks
Time needed per task is variable
Map tasks to processors in a cyclic fashion

Cyclic (Interleaved) Allocation
Assume p processes
Each process gets every p-th piece of work
Example: 5 processes and 12 pieces of work
P0: 0, 5, 10
P1: 1, 6, 11
P2: 2, 7
P3: 3, 8
P4: 4, 9

Pop Quiz
Assume n pieces of work, p processes, and cyclic allocation
What is the most pieces of work any process has?
What is the least pieces of work any process has?
How many processes have the most pieces of work?
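For reference, the answers follow from integer division (they are not spelled out on the slide):

\[
\text{most} = \left\lceil \tfrac{n}{p} \right\rceil, \qquad
\text{least} = \left\lfloor \tfrac{n}{p} \right\rfloor, \qquad
\#\{\text{processes with the most}\} =
\begin{cases} n \bmod p & \text{if } p \nmid n \\ p & \text{if } p \mid n \end{cases}
\]

In the example above (n = 12, p = 5): most = 3, least = 2, and 12 mod 5 = 2 processes (P0 and P1) hold the most.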

Summary of Program Design
Program will consider all 65,536 combinations of 16 boolean inputs
Combinations allocated in cyclic fashion to processes
Each process examines each of its combinations
If it finds a satisfiable combination, it will print it

Include Files
MPI header file: #include <mpi.h>
Standard I/O header file: #include <stdio.h>

Local Variables

int main (int argc, char *argv[]) {
   int i;
   int id;   /* Process rank */
   int p;    /* Number of processes */
   void check_circuit (int, int);

Include argc and argv: they are needed to initialize MPI
One copy of every variable for each process running this program

Initialize MPI
First MPI function called by each process
Not necessarily first executable statement
Allows system to do any necessary setup

MPI_Init (&argc, &argv);

Communicators
Communicator: opaque object that provides message-passing environment for processes
MPI_COMM_WORLD
Default communicator
Includes all processes
Possible to create new communicators
Will do this in Chapters 8 and 9

Communicator
[Figure: the communicator MPI_COMM_WORLD, labeled with its name, its processes, and their ranks]

Determine Number of Processes
First argument is communicator
Number of processes returned through second argument

MPI_Comm_size (MPI_COMM_WORLD, &p);

Determine Process Rank
First argument is communicator
Process rank (in range 0, 1, …, p-1) returned through second argument

MPI_Comm_rank (MPI_COMM_WORLD, &id);

Replication of Automatic Variables
[Figure: six processes, each holding its own copy of id (0 through 5) and of p (6 in every process)]

What about External Variables?

int total;

int main (int argc, char *argv[]) {
   int i;
   int id;
   int p;
   …

Where is variable total stored?

Cyclic Allocation of Work

for (i = id; i < 65536; i += p)
   check_circuit (id, i);

Parallelism is outside function check_circuit
It can be an ordinary, sequential function

Shutting Down MPI
Call after all other MPI library calls
Allows system to free up MPI resources

MPI_Finalize();

#include <mpi.h>
#include <stdio.h>

int main (int argc, char *argv[]) {
   int i;
   int id;
   int p;
   void check_circuit (int, int);

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);
   for (i = id; i < 65536; i += p)
      check_circuit (id, i);
   printf ("Process %d is done\n", id);
   fflush (stdout);
   MPI_Finalize();
   return 0;
}

Put fflush() after every printf()

/* Return 1 if 'i'th bit of 'n' is 1; 0 otherwise */
#define EXTRACT_BIT(n,i) ((n&(1<<i))?1:0)

void check_circuit (int id, int z) {
   int v[16];   /* Each element is a bit of z */
   int i;

   for (i = 0; i < 16; i++) v[i] = EXTRACT_BIT(z,i);
   if ((v[0] || v[1]) && (!v[1] || !v[3]) && (v[2] || v[3])
      && (!v[3] || !v[4]) && (v[4] || !v[5])
      && (v[5] || !v[6]) && (v[5] || v[6])
      && (v[6] || !v[15]) && (v[7] || !v[8])
      && (!v[7] || !v[13]) && (v[8] || v[9])
      && (v[8] || !v[9]) && (!v[9] || !v[10])
      && (v[9] || v[11]) && (v[10] || v[11])
      && (v[12] || v[13]) && (v[13] || !v[14])
      && (v[14] || v[15])) {
      printf ("%d) %d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d\n", id,
         v[0],v[1],v[2],v[3],v[4],v[5],v[6],v[7],v[8],v[9],
         v[10],v[11],v[12],v[13],v[14],v[15]);
      fflush (stdout);
   }
}

Compiling MPI Programs
mpicc: script to compile and link C+MPI programs
Flags: same meaning as C compiler
-O: optimize
-o: where to put executable

mpicc -O -o foo foo.c

Running MPI Programs

mpirun -np <p> <exec> <arg1> …

-np <p>: number of processes
<exec>: executable
<arg1> …: command-line arguments

Specifying Host Processors
File .mpi-machines in home directory lists host processors in order of their use
Example .mpi-machines file contents:
band01.cs.ppu.edu
band02.cs.ppu.edu
band03.cs.ppu.edu
band04.cs.ppu.edu

Enabling Remote Logins
MPI needs to be able to initiate processes on other processors without supplying a password
Each processor in group must list all other processors in its .rhosts file; e.g.,
band01.cs.ppu.edu student
band02.cs.ppu.edu student
band03.cs.ppu.edu student
band04.cs.ppu.edu student

Execution on 1 CPU

% mpirun -np 1 sat
[nine satisfying input combinations, each printed by process 0]
Process 0 is done

Execution on 2 CPUs

% mpirun -np 2 sat
[nine satisfying input combinations, printed by processes 0 and 1]
Process 0 is done
Process 1 is done

Execution on 3 CPUs

% mpirun -np 3 sat
[nine satisfying input combinations, printed by processes 0, 1, and 2]
Process 1 is done
Process 2 is done
Process 0 is done

Deciphering Output
Output order only partially reflects order of output events inside parallel computer
If process A prints two messages, first message will appear before second
If process A calls printf before process B, there is no guarantee process A's message will appear before process B's message

Enhancing the Program
We want to find total number of solutions
Incorporate sum-reduction into program
Reduction is a collective communication

Modifications
Modify function check_circuit:
Return 1 if circuit satisfiable with input combination
Return 0 otherwise
Each process keeps local count of satisfiable circuits it has found
Perform reduction after for loop

New Declarations and Code

int count;         /* Local sum  */
int global_count;  /* Global sum */
int check_circuit (int, int);

count = 0;
for (i = id; i < 65536; i += p)
   count += check_circuit (id, i);

Prototype of MPI_Reduce()

int MPI_Reduce (
   void         *operand,  /* addr of 1st reduction element */
   void         *result,   /* addr of 1st reduction result  */
   int           count,    /* reductions to perform         */
   MPI_Datatype  type,     /* type of elements              */
   MPI_Op        operator, /* reduction operator            */
   int           root,     /* process getting result(s)     */
   MPI_Comm      comm      /* communicator                  */
)

MPI_Datatype Options
MPI_CHAR, MPI_DOUBLE, MPI_FLOAT, MPI_INT, MPI_LONG, MPI_LONG_DOUBLE, MPI_SHORT, MPI_UNSIGNED_CHAR, MPI_UNSIGNED, MPI_UNSIGNED_LONG, MPI_UNSIGNED_SHORT

MPI_Op Options
MPI_BAND, MPI_BOR, MPI_BXOR, MPI_LAND, MPI_LOR, MPI_LXOR, MPI_MAX, MPI_MAXLOC, MPI_MIN, MPI_MINLOC, MPI_PROD, MPI_SUM

Our Call to MPI_Reduce()

MPI_Reduce (&count, &global_count, 1, MPI_INT, MPI_SUM,
            0, MPI_COMM_WORLD);

Only process 0 will get the result:

if (!id) printf ("There are %d different solutions\n", global_count);
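Assembled into one listing (a sketch pieced together from the fragments above; the slides never show the enhanced program whole):

#include <mpi.h>
#include <stdio.h>

int check_circuit (int, int);   /* now returns 1 if satisfiable, else 0 */

int main (int argc, char *argv[]) {
   int i, id, p;
   int count = 0;       /* Local sum  */
   int global_count;    /* Global sum */

   MPI_Init (&argc, &argv);
   MPI_Comm_rank (MPI_COMM_WORLD, &id);
   MPI_Comm_size (MPI_COMM_WORLD, &p);
   for (i = id; i < 65536; i += p)
      count += check_circuit (id, i);
   MPI_Reduce (&count, &global_count, 1, MPI_INT, MPI_SUM,
               0, MPI_COMM_WORLD);
   printf ("Process %d is done\n", id);
   fflush (stdout);
   MPI_Finalize();
   if (!id) printf ("There are %d different solutions\n", global_count);
   return 0;
}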

Execution of Second Program

% mpirun -np 3 seq2
[nine satisfying input combinations, printed by processes 0, 1, and 2]
Process 1 is done
Process 2 is done
Process 0 is done
There are 9 different solutions

Benchmarking the Program
MPI_Barrier: barrier synchronization
MPI_Wtick: timer resolution
MPI_Wtime: current time

Benchmarking Code

double elapsed_time;
…
MPI_Init (&argc, &argv);
MPI_Barrier (MPI_COMM_WORLD);
elapsed_time = -MPI_Wtime();
…
MPI_Reduce (…);
elapsed_time += MPI_Wtime();
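Process 0 would then report the timing; a minimal sketch (the print statement is an assumption, not shown on the slide):

if (!id) {
   printf ("Total elapsed time: %10.6f seconds\n", elapsed_time);
   fflush (stdout);
}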

Benchmarking Results
[Table: execution time (sec) for each number of processors]

Benchmarking Results
[Figure: plot of execution time versus number of processors]

Summary (1/2)
Message-passing programming follows naturally from task/channel model
Portability of message-passing programs
MPI most widely adopted standard

Summary (2/2)
MPI functions introduced:
MPI_Init
MPI_Comm_rank
MPI_Comm_size
MPI_Reduce
MPI_Finalize
MPI_Barrier
MPI_Wtime
MPI_Wtick

Chapter 6 Floyd's Algorithm

Chapter Objectives
Creating 2-D arrays
Thinking about "grain size"
Introducing point-to-point communications
Reading and printing 2-D matrices
Analyzing performance when computations and communications overlap

Outline
All-pairs shortest path problem
Dynamic 2-D arrays
Parallel algorithm design
Point-to-point communication
Block row matrix I/O
Analysis and benchmarking

All-pairs Shortest Path Problem
[Figure: a graph with vertices A through E and the resulting adjacency matrix containing distances]

Floyd's Algorithm

for k ← 0 to n-1
   for i ← 0 to n-1
      for j ← 0 to n-1
         a[i,j] ← min (a[i,j], a[i,k] + a[k,j])
      endfor
   endfor
endfor
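A direct C rendering of the sequential kernel (a sketch; the MIN macro is a hypothetical helper, and a uses the 2-D allocation idiom shown below):

#define MIN(a,b) ((a) < (b) ? (a) : (b))

/* a is an n x n distance matrix; entry a[i][j] is updated in place */
void floyd (int **a, int n)
{
   int i, j, k;
   for (k = 0; k < n; k++)
      for (i = 0; i < n; i++)
         for (j = 0; j < n; j++)
            a[i][j] = MIN(a[i][j], a[i][k] + a[k][j]);
}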

Why It Works
[Figure: vertices i, k, and j]
Shortest path from i to k through 0, 1, …, k-1
Shortest path from k to j through 0, 1, …, k-1
Shortest path from i to j through 0, 1, …, k-1
Computed in previous iterations

Dynamic 1-D Array Creation
[Figure: pointer A on the run-time stack pointing to an array allocated on the heap]
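In C, the picture corresponds to a single heap allocation; a minimal sketch of the idiom:

int *A;                                /* pointer A lives on the run-time stack */
A = (int *) malloc (n * sizeof(int));  /* the n elements live on the heap */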

Dynamic 2-D Array Creation
[Figure: row-pointer array B and contiguous element block Bstorage on the heap, referenced from the run-time stack]
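The labels B and Bstorage correspond to the standard idiom from the text: one contiguous block for the elements plus an array of row pointers. A sketch for an m × n matrix:

int **B;        /* array of m row pointers          */
int *Bstorage;  /* one contiguous block of m*n ints */
int i;

Bstorage = (int *) malloc (m * n * sizeof(int));
B = (int **) malloc (m * sizeof(int *));
for (i = 0; i < m; i++)
   B[i] = &Bstorage[i * n];   /* row i starts at offset i*n */

Keeping the elements contiguous lets a whole block of rows be read from a file or handed to an MPI call in one operation.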

Designing Parallel Algorithm
Partitioning
Communication
Agglomeration and Mapping

Partitioning
Domain or functional decomposition?
Look at pseudocode
Same assignment statement executed n³ times
No functional parallelism
Domain decomposition: divide matrix A into its n² elements

Communication
[Figure: primitive tasks updating a[3,4] when k = 1]
Iteration k: every task in row k broadcasts its value within its task column
Iteration k: every task in column k broadcasts its value within its task row

Agglomeration and Mapping
Number of tasks: static
Communication among tasks: structured
Computation time per task: constant
Strategy:
Agglomerate tasks to minimize communication
Create one task per MPI process

Two Data Decompositions
Rowwise block striped
Columnwise block striped

Comparing Decompositions
Columnwise block striped:
Broadcast within columns eliminated
Rowwise block striped:
Broadcast within rows eliminated
Reading matrix from file simpler
Choose rowwise block striped decomposition

File Input
[Figure: reading the matrix from a file]

Pop Quiz
Why don't we input the entire file at once and then scatter its contents among the processes, allowing concurrent message passing?

Point-to-point Communication
Involves a pair of processes
One process sends a message
Other process receives the message

Send/Receive Not Collective

Function MPI_Send

int MPI_Send (
   void         *message,
   int           count,
   MPI_Datatype  datatype,
   int           dest,
   int           tag,
   MPI_Comm      comm
)

Function MPI_Recv

int MPI_Recv (
   void         *message,
   int           count,
   MPI_Datatype  datatype,
   int           source,
   int           tag,
   MPI_Comm      comm,
   MPI_Status   *status
)

Coding Send/Receive

…
if (ID == j) {
   …
   Receive from i
   …
}
…
if (ID == i) {
   …
   Send to j
   …
}
…

Receive is before Send. Why does this work?
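A concrete instance of the pattern with assumed ranks i = 0 and j = 1 and a one-int payload (a sketch, not the slides' exact code; id is the rank returned by MPI_Comm_rank):

int msg;
MPI_Status status;

if (id == 1) {          /* process j posts its receive first */
   MPI_Recv (&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
} else if (id == 0) {   /* process i sends */
   msg = 42;
   MPI_Send (&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
}

It works because the two branches execute in different processes: process 1 simply blocks inside MPI_Recv until process 0 reaches its MPI_Send.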

Variants of send

Inside MPI_Send and MPI_Recv
[Figure: MPI_Send copies the message from the sending process's program memory into a system buffer; MPI_Recv moves it from the receiving process's system buffer into its program memory]

Return from MPI_Send
Function blocks until message buffer free
Message buffer is free when:
Message copied to system buffer, or
Message transmitted
Typical scenario:
Message copied to system buffer
Transmission overlaps computation

Return from MPI_Recv
Function blocks until message in buffer
If message never arrives, function never returns

Deadlock
Deadlock: process waiting for a condition that will never become true
Easy to write send/receive code that deadlocks:
Two processes: both receive before send
Send tag doesn't match receive tag
Process sends message to wrong destination process
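A minimal sketch of the first failure mode in a two-process run (variable names are assumptions):

int mine = id, theirs;
int partner = 1 - id;   /* the other rank in a 2-process run */
MPI_Status status;

/* Both processes block here, each waiting for a message the other
   has not yet sent: deadlock. */
MPI_Recv (&theirs, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &status);
MPI_Send (&mine, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);

Reversing the receive/send order in one of the two processes (or using MPI_Sendrecv) breaks the cycle.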

Function MPI_Bcast

int MPI_Bcast (
   void         *buffer,   /* Addr of 1st element     */
   int           count,    /* # elements to broadcast */
   MPI_Datatype  datatype, /* Type of elements        */
   int           root,     /* ID of root process      */
   MPI_Comm      comm      /* Communicator            */
)

MPI_Bcast (&k, 1, MPI_INT, 0, MPI_COMM_WORLD);
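In Floyd's algorithm the outer-loop broadcast carries row k of the matrix; a sketch under the rowwise decomposition (tmp, root, and k_local are hypothetical names):

/* root: rank owning row k; k_local: that row's local index there;
   tmp: an n-element scratch buffer present on every process */
if (id == root)
   for (j = 0; j < n; j++) tmp[j] = a[k_local][j];
MPI_Bcast (tmp, n, MPI_INT, root, MPI_COMM_WORLD);
/* every process now updates its own rows using tmp as row k */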

Computational Complexity
Innermost loop has complexity Θ(n)
Middle loop executed at most ⌈n/p⌉ times
Outer loop executed n times
Overall complexity: Θ(n³/p)

Communication Complexity
No communication in inner loop
No communication in middle loop
Broadcast in outer loop: complexity is Θ(n log p)
Overall complexity: Θ(n² log p)

Execution Time Expression (1)
(iterations of outer loop) × (iterations of middle loop) × (iterations of inner loop) × (cell update time)
+ (iterations of outer loop) × (messages per broadcast) × (message-passing time)
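Written out in Quinn's usual notation (an assumption about symbols: χ = cell update time, λ = message latency, β = bandwidth in bytes per second, 4-byte matrix elements), the expression reads:

\[
T(n,p) \;=\; \underbrace{n \left\lceil \frac{n}{p} \right\rceil n \,\chi}_{\text{computation}}
\;+\; \underbrace{n \lceil \log p \rceil \left( \lambda + \frac{4n}{\beta} \right)}_{\text{communication}}
\]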

Computation/communication Overlap

Execution Time Expression (2)
Same labeled factors as expression (1): iterations of outer, middle, and inner loops, cell update time, messages per broadcast, and message-passing time
Message transmission now overlaps with computation, so only its non-overlapped portion remains on the critical path

Predicted vs. Actual Performance
[Table: predicted and actual execution times (sec) by number of processes]

Summary
Two matrix decompositions:
Rowwise block striped
Columnwise block striped
Blocking send/receive functions:
MPI_Send
MPI_Recv
Overlapping communications with computations