Published by Barrie Lawrence Shaw. Modified over 9 years ago.
HPCA2001

Slide 1: Message Passing Interface (MPI) and Parallel Algorithm Design
Slide 2: What is MPI?
A message passing library specification:
– a message-passing model
– not a compiler specification
– not a specific product
For parallel computers, clusters, and heterogeneous networks. Full-featured.
Slide 3: Why use MPI? (1)
Message passing is now a mature programming paradigm:
– well understood
– an efficient match to hardware
– many applications
Slide 4: Who Designed MPI?
– Vendors: IBM, Intel, Sun, SGI, Meiko, Cray, Convex, nCUBE, …
– Research labs: PVM, p4, Zipcode, TCGMSG, Chameleon, Express, Linda, PM (Japan RWCP), AM (Berkeley), FM (HPVM at Illinois)
Slide 5: Vendor-Supported MPI
– HP-MPI: HP; Convex SPP
– MPI-F: IBM SP1/SP2
– Hitachi/MPI: Hitachi
– SGI/MPI: SGI PowerChallenge series
– MPI/DE: NEC
– INTEL/MPI: Intel Paragon (iCC lib)
– T.MPI: Telmat Multinode
– Fujitsu/MPI: Fujitsu AP1000
– EPCC/MPI: Cray & EPCC, T3D/T3E
Slide 6: Research MPI
– MPICH: Argonne National Lab. & Mississippi State U.
– LAM: Ohio Supercomputer Center
– MPICH/NT: Mississippi State U.
– MPI-FM: Illinois (Myrinet)
– MPI-AM: UC Berkeley (Myrinet)
– MPI-PM: RWCP, Japan (Myrinet)
– MPI-CCL: Calif. Tech.
Slide 7: Research MPI (2)
– CRI/EPCC MPI: Cray Research and Edinburgh Parallel Computing Centre (Cray T3D/E)
– MPI-AP: Australian National U. (AP1000), CAP Research Program
– W32MPI: Illinois, Concurrent Systems
– RACE-MPI: Hughes Aircraft Co.
– MPI-BIP: INRIA, France (Myrinet)
Slide 8: Language Binding
– MPI 1: C, Fortran (for MPICH-based implementations)
– MPI 2: C, C++, Fortran
– Java:
  – through the Java native method interface (JNI): mpiJava, JavaMPI
  – the MPI package implemented in pure Java: MPIJ (DOGMA project), JMPI (by MPI Software Technology)
Slide 9: Main Features of MPI
Slide 10: “Communicator”
– Identifies the process group and context with respect to which the operation is to be performed.
– In a parallel environment, processes need to know each other (“naming”: machine name, IP address, process ID).
Slide 11: Communicator (2)
[Figure: four communicators, each containing a group of processes]
– Processes in different communicators cannot communicate through those communicators.
– The same process can exist in different communicators.
Slide 12: Point-to-point Communication
The basic point-to-point communication operations are send and receive.
Communication modes:
– standard mode (blocking and non-blocking)
– synchronous mode
– ready mode (to allow access to fast protocols)
– buffered mode
– …
Slide 13: Collective Communication
Communication that involves a group of processes, e.g., broadcast, barrier, reduce, scatter, gather, all-to-all, …
Slide 14: MPI Programming
Slide 15: Writing MPI programs
– MPI comprises 125 functions.
– Many parallel programs can be written with just 6 basic functions.
Slide 16: Six basic functions (1)
1. MPI_INIT: initiate an MPI computation
2. MPI_FINALIZE: terminate a computation
3. MPI_COMM_SIZE: determine the number of processes in a communicator
4. MPI_COMM_RANK: determine the identifier (rank) of a process in a specific communicator
5. MPI_SEND: send a message from one process to another
6. MPI_RECV: receive a message sent by another process
Slide 17: A simple program
Program main
begin
  MPI_INIT()                            -- initiate computation
  MPI_COMM_SIZE(MPI_COMM_WORLD, count)  -- find the number of processes
  MPI_COMM_RANK(MPI_COMM_WORLD, myid)   -- find the process ID of the current process
  print(“I am ”, myid, “ of ”, count)   -- each process prints its output
  MPI_FINALIZE()                        -- shut down
end
Slide 18: Result (4 processes)
I’m 0 of 4
I’m 2 of 4
I’m 1 of 4
I’m 3 of 4
(The four processes’ output lines may appear in any order.)
Slide 19: Another program (2 nodes)
…
MPI_COMM_RANK(MPI_COMM_WORLD, myid)
if myid = 0
  MPI_SEND(“Zero”,…,…,1,…,…)
  MPI_RECV(words,…,…,1,…,…,…)
else
  MPI_RECV(words,…,…,0,…,…,…)
  MPI_SEND(“One”,…,…,0,…,…)
end if
print(“Received from %s”, words)
…
(Process 0 takes the first branch; process 1 takes the else branch.)
Slide 20: Result
Process 0: Received from One
Process 1: Received from Zero
Slide 21: Collective Communication
Three types of collective operations:
– Barrier: for process synchronization (MPI_BARRIER)
– Data movement: moving data among processes, no computation (MPI_BCAST, MPI_GATHER, MPI_SCATTER)
– Reduction operations: involve computation (MPI_REDUCE, MPI_SCAN)
Slide 22: Barrier
MPI_BARRIER is used to synchronize execution of a group of processes: each member blocks at the barrier, and all members must reach the same point before any can proceed.
[Figure: processes 1 through p compute, perform the barrier, wait (blocking time) until the last one arrives, then continue execution]
Slide 23: Data Movement
– Broadcast: one member sends the same message to all members
– Scatter: one member sends a different message to each member
– Gather: every member sends a message to a single member
– All-to-all broadcast: every member performs a broadcast
– All-to-all scatter-gather (total exchange): every member performs a scatter (and gather)
Slide 24: MPI Collective Communications
– Broadcast: MPI_Bcast
– Scatter: MPI_Scatter
– Gather: MPI_Gather
– Collect: MPI_Allgather
– Reduce (combine-to-one): MPI_Reduce
– Combine-to-all: MPI_Allreduce
– Scan: MPI_Scan
– All-to-all: MPI_Alltoall
Slide 25: Data movement (1): MPI_BCAST
A single process sends the same data to all other processes, itself included.
[Figure: process 0 broadcasts “FACE”; after the BCAST, every process 0–3 holds “FACE”]
Slide 26: Data movement (2): MPI_GATHER
All processes (the root process included) send their data to one process, which stores the pieces in rank order.
[Figure: processes 0–3 hold “F”, “A”, “C”, “E”; after the GATHER, the root holds “FACE”]
Slide 27: Data movement (3): MPI_SCATTER
A process sends out a message, which is split into several equal parts; the i-th portion is sent to the i-th process.
[Figure: process 0 holds “FACE”; after the SCATTER, processes 0–3 hold “F”, “A”, “C”, “E”]
Slide 28: Data movement (4): MPI_REDUCE
Combines one value from each process, using a specified operation, and returns the combined value to a process (e.g., find the maximum value).
[Figure: processes hold 9, 3, 7, 8; a max-reduction delivers 9 to the root]
Slide 29: MPI_SCAN
Scan (parallel prefix): a “partial” reduction based upon relative process number. With op +, process i receives the sum of the inputs of processes 0 through i.
[Figure: each process’s result is the running + of the inputs up to and including that process]
Slide 30: Example program (1)
Calculating the value of π by numerically integrating π = ∫₀¹ 4/(1+x²) dx.
Slide 31: Example program (2)
…
MPI_BCAST(numprocs, …, …, 0, …)
for (i = myid + 1; i <= n; i += numprocs)
  compute the area for each interval
  accumulate the result in each process’s local variable (sum)
MPI_REDUCE(&sum, …, …, …, MPI_SUM, 0, …)
if (myid == 0)
  output the result
…
Slide 32: Result
[Figure: the integration intervals are divided among processes 0–3; each computes its portion, and the combined result is π = 3.141…]