1 MPI: Message Passing Interface Prabhaker Mateti Wright State University

Mateti, MPI 2 Overview
MPI Hello World!
Introduction to programming with MPI
MPI library calls

Mateti, MPI 3 MPI Overview
Similar to PVM
Network of Heterogeneous Machines
Multiple implementations
–Open source: MPICH, LAM
–Vendor specific

Mateti, MPI 4 MPI Features
Rigorously specified standard
Portable source code
Enables third-party libraries
Derived data types to minimize overhead
Process topologies for efficiency on MPP
Can fully overlap communication
Extensive group communication

Mateti, MPI 5 MPI 2
Dynamic Process Management
One-Sided Communication
Extended Collective Operations
External Interfaces
Parallel I/O
Language Bindings (C++ and Fortran-90)

Mateti, MPI 6 MPI Overview 125+ functions; typical applications need only about 6 of them.
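The sketch below (not from the original slides) is a complete program that uses only those six calls: MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Send, MPI_Recv, and MPI_Finalize. The token value and tag are arbitrary.

/* Minimal sketch using only the six most common MPI calls. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int np, myrank, token = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

    if (myrank == 0) {
        int i;
        for (i = 1; i < np; ++i)          /* root sends one int to every other rank */
            MPI_Send(&token, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank %d received %d\n", myrank, token);
    }

    MPI_Finalize();
    return 0;
}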

Mateti, MPI 7 MPI: manager+workers

#include "mpi.h"
int main(int argc, char *argv[])
{
    int myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0)
        manager();
    else
        worker();
    MPI_Finalize();
    return 0;
}

MPI_Init initializes the MPI system.
MPI_Finalize is called last by all processes.
MPI_Comm_rank identifies a process by its rank.
MPI_COMM_WORLD is the group that this process belongs to.

Mateti, MPI 8 MPI: manager()

manager()
{
    int ntasks, i, work;
    double sub, pi;
    MPI_Status status;
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
    for (i = 1; i < ntasks; ++i) {
        work = nextWork();
        MPI_Send(&work, 1, MPI_INT, i, WORKTAG, MPI_COMM_WORLD);
    }
    /* … */
    MPI_Reduce(&sub, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
}

New calls: MPI_Comm_size, MPI_Send

Mateti, MPI 9 MPI: worker()

worker()
{
    int work;
    double result;
    MPI_Status status;
    for (;;) {
        MPI_Recv(&work, 1, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        result = doWork();
        MPI_Send(&result, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
    }
}

New call: MPI_Recv

Mateti, MPI 10 MPI computes π

#include <stdio.h>
#include "mpi.h"
int main(int argc, char *argv[])
{
    int np, myid, n;
    double sub, pi;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    n = ...;   /* intervals */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    sub = series_sum(n, np);
    MPI_Reduce(&sub, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0)
        printf("pi is %.16f\n", pi);
    MPI_Finalize();
    return 0;
}

Mateti, MPI 11 Process groups Group membership is static. There are no race conditions caused by processes independently entering and leaving a group. New group formation is collective and group membership information is distributed, not centralized.
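As an illustration (not from the slides), a new communicator for a subgroup of MPI_COMM_WORLD is formed collectively; the helper below, a sketch, builds one containing only the even-ranked processes and assumes at most 256 ranks for brevity.

/* Sketch: collectively form a communicator for the even-ranked processes. */
#include "mpi.h"

void make_even_group(void)
{
    MPI_Group world_group, even_group;
    MPI_Comm  even_comm;
    int np, i, n_even;
    int ranks[256];                       /* assumes np <= 256 for brevity */

    MPI_Comm_size(MPI_COMM_WORLD, &np);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);

    for (i = 0, n_even = 0; i < np; i += 2)
        ranks[n_even++] = i;              /* ranks 0, 2, 4, ... */

    MPI_Group_incl(world_group, n_even, ranks, &even_group);
    MPI_Comm_create(MPI_COMM_WORLD, even_group, &even_comm);
    /* even_comm is MPI_COMM_NULL on the odd-ranked processes */

    if (even_comm != MPI_COMM_NULL) MPI_Comm_free(&even_comm);
    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
}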

Mateti, MPI 12 MPI_Send: blocking send

MPI_Send(
    &sendbuffer,   /* message buffer */
    n,             /* n items of */
    MPI_type,      /* data type in message */
    destination,   /* process rank */
    WORKTAG,       /* user chosen tag */
    MPI_COMM       /* group */
);

Mateti, MPI 13 MPI_Recv: blocking receive

MPI_Recv(
    &recvbuffer,      /* message buffer */
    n,                /* n data items */
    MPI_type,         /* of type */
    MPI_ANY_SOURCE,   /* from any sender */
    MPI_ANY_TAG,      /* any type of message */
    MPI_COMM,         /* group */
    &status
);

Mateti, MPI 14 Send-receive succeeds when …
Sender's destination is a valid process rank
Receiver specified a valid source process
Communicator is the same for both
Tags match
Message data types match
Receiver's buffer is large enough
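A minimal pair that meets all of these conditions might look as follows (a sketch, not from the slides; the tag value is arbitrary and the receive buffer is deliberately larger than the message):

/* Sketch: a send/receive pair with matching communicator, tag, and type. */
#include "mpi.h"

void exchange_example(void)
{
    enum { TAG_DATA = 7 };
    int myrank;
    double x = 3.14, y[4];                /* receiver's buffer holds up to 4 doubles */
    MPI_Status status;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        MPI_Send(&x, 1, MPI_DOUBLE, 1, TAG_DATA, MPI_COMM_WORLD);
    } else if (myrank == 1) {
        int n;
        MPI_Recv(y, 4, MPI_DOUBLE, 0, TAG_DATA, MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_DOUBLE, &n);   /* n == 1: items actually received */
    }
}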

Mateti, MPI 15 Message Order
If P sends message m1 first and then m2 to Q, Q will receive m1 before m2.
If P sends m1 to Q and then m2 to R, nothing can be concluded (in terms of a global wall clock) about whether R receives m2 before or after Q receives m1.

Mateti, MPI 16 Blocking and Non-blocking
Send and receive can each be blocking or non-blocking.
A blocking send can be coupled with a non-blocking receive, and vice versa.
A non-blocking send can use
–Standard mode: MPI_Isend
–Synchronous mode: MPI_Issend
–Buffered mode: MPI_Ibsend
–Ready mode: MPI_Irsend

Mateti, MPI 17 MPI_Isend: non-blocking send

MPI_Isend(
    &buffer,       /* message buffer */
    n,             /* n items of */
    MPI_type,      /* data type in message */
    destination,   /* process rank */
    WORKTAG,       /* user chosen tag */
    MPI_COMM,      /* group */
    &handle
);

Mateti, MPI 18 MPI_Irecv

MPI_Irecv(
    &result,          /* message buffer */
    n,                /* n data items */
    MPI_type,         /* of type */
    MPI_ANY_SOURCE,   /* from any sender */
    MPI_ANY_TAG,      /* any type of message */
    MPI_COMM_WORLD,   /* group */
    &handle
);

Mateti, MPI 19 MPI_Wait

MPI_Wait(
    &handle,
    &status
);

Mateti, MPI 20 MPI_Wait, MPI_Test

MPI_Wait(&handle, &status);          /* blocks until the request completes */
MPI_Test(&handle, &flag, &status);   /* returns immediately; flag is set if complete */
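A typical usage pattern (a sketch, not from the slides) posts the non-blocking calls, does local work, and then uses MPI_Test to poll and MPI_Wait to block; here handle corresponds to an MPI_Request, and partner is an assumed peer rank.

/* Sketch: overlap communication with computation using non-blocking calls. */
#include "mpi.h"

void overlap_example(int partner)
{
    double outbuf = 1.0, inbuf;
    MPI_Request sendreq, recvreq;
    MPI_Status  status;
    int done = 0;

    MPI_Irecv(&inbuf, 1, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &recvreq);
    MPI_Isend(&outbuf, 1, MPI_DOUBLE, partner, 0, MPI_COMM_WORLD, &sendreq);

    /* ... do useful local work here while messages are in flight ... */

    MPI_Test(&recvreq, &done, &status);   /* poll: has the receive completed? */
    if (!done)
        MPI_Wait(&recvreq, &status);      /* block until it has */
    MPI_Wait(&sendreq, MPI_STATUS_IGNORE);
}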

Mateti, MPI 21 Collective Communication

Mateti, MPI 22 MPI_Bcast

MPI_Bcast(
    buffer,
    count,
    MPI_Datatype,
    root,
    MPI_Comm
);

All processes use the same count, data type, root, and communicator. Before the operation, the root's buffer contains a message. After the operation, all buffers contain the message from the root.
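For example (a sketch, not from the slides), broadcasting a problem size n chosen by the root:

/* Sketch: every rank calls MPI_Bcast with identical count, type, root, and
   communicator; afterwards all ranks hold the root's value of n. */
#include "mpi.h"

void bcast_example(void)
{
    int myrank, n = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0)
        n = 1000;                         /* only the root has the value beforehand */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    /* now n == 1000 on every rank */
}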

Mateti, MPI 23 MPI_Scatter

MPI_Scatter(
    sendbuffer,
    sendcount,
    MPI_Datatype,
    recvbuffer,
    recvcount,
    MPI_Datatype,
    root,
    MPI_Comm
);

All processes use the same send and receive counts, data types, root, and communicator. Before the operation, the root's send buffer contains a message of length sendcount * N, where N is the number of processes. After the operation, the message is divided equally and dispersed to all processes (including the root) in rank order.
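A sketch (not from the slides) that scatters 4 doubles to each process; only the root allocates the full send buffer of length sendcount * N.

/* Sketch: the root scatters one block of 4 doubles to each rank.
   recvcount is the per-process count, not the total. */
#include <stdlib.h>
#include "mpi.h"

void scatter_example(void)
{
    int myrank, np;
    double *sendbuf = NULL, recvbuf[4];

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    if (myrank == 0) {
        sendbuf = malloc(np * 4 * sizeof(double));   /* length sendcount * N */
        /* ... fill sendbuf on the root ... */
    }

    MPI_Scatter(sendbuf, 4, MPI_DOUBLE,
                recvbuf, 4, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    /* rank i now holds elements [4*i .. 4*i+3] of the root's sendbuf */

    if (myrank == 0) free(sendbuf);
}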

Mateti, MPI 24 MPI_Gather

MPI_Gather(
    sendbuffer,
    sendcount,
    MPI_Datatype,
    recvbuffer,
    recvcount,
    MPI_Datatype,
    root,
    MPI_Comm
);

This is the “reverse” of MPI_Scatter(). After the operation, the root process has in its receive buffer the concatenation of the send buffers of all processes (including its own), with a total message length of recvcount * N, where N is the number of processes. The message is gathered in rank order.
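A matching sketch (not from the slides) in which each rank contributes 4 doubles and the root collects them in rank order:

/* Sketch: each rank contributes 4 doubles; the root concatenates them. */
#include <stdlib.h>
#include "mpi.h"

void gather_example(void)
{
    int myrank, np;
    double local[4] = {0, 1, 2, 3};       /* each rank's contribution */
    double *recvbuf = NULL;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &np);

    if (myrank == 0)
        recvbuf = malloc(np * 4 * sizeof(double));   /* length recvcount * N */

    MPI_Gather(local, 4, MPI_DOUBLE,
               recvbuf, 4, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    /* on the root, recvbuf[4*i .. 4*i+3] came from rank i */

    if (myrank == 0) free(recvbuf);
}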

Mateti, MPI 25 MPI_Reduce

MPI_Reduce(
    sndbuf,
    rcvbuf,
    count,
    MPI_Datatype,
    MPI_Op,
    root,
    MPI_Comm
);

After the operation, the root process has in its receive buffer the result of the pair-wise reduction of the send buffers of all processes, including its own.

Mateti, MPI 26 Predefined Reduction Ops
MPI_MAX, MPI_MIN, MPI_SUM, MPI_PROD
MPI_LAND, MPI_BAND
MPI_LOR, MPI_BOR
MPI_LXOR, MPI_BXOR
MPI_MAXLOC, MPI_MINLOC
(L = logical, B = bit-wise)
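MPI_MAXLOC and MPI_MINLOC reduce (value, index) pairs rather than plain values, so they are used with paired datatypes such as MPI_DOUBLE_INT; a sketch (not from the slides):

/* Sketch: find the largest local_error across all ranks and who had it. */
#include "mpi.h"

struct double_int { double value; int rank; };   /* layout of MPI_DOUBLE_INT */

void maxloc_example(double local_error)
{
    struct double_int in, out;
    int myrank;

    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    in.value = local_error;
    in.rank  = myrank;

    MPI_Reduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, 0, MPI_COMM_WORLD);
    /* on rank 0: out.value is the largest local_error, out.rank is its owner */
}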

Mateti, MPI 27 User Defined Reduction Ops

void myOperator(
    void *invector,
    void *inoutvector,
    int *length,
    MPI_Datatype *datatype)
{
    …
}
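Such an operator is registered with MPI_Op_create and can then be passed to MPI_Reduce like a predefined operation; a sketch (not from the slides) implementing an element-wise maximum:

/* Sketch: a user-defined reduction registered with MPI_Op_create.
   The "1" marks the operation as commutative. */
#include "mpi.h"

void vector_max(void *invec, void *inoutvec, int *len, MPI_Datatype *dtype)
{
    double *in = (double *)invec, *inout = (double *)inoutvec;
    int i;
    for (i = 0; i < *len; ++i)            /* element-wise maximum */
        if (in[i] > inout[i]) inout[i] = in[i];
    (void)dtype;                          /* unused here */
}

void custom_reduce_example(double *local, double *global, int n)
{
    MPI_Op my_max;
    MPI_Op_create(vector_max, 1 /* commutative */, &my_max);
    MPI_Reduce(local, global, n, MPI_DOUBLE, my_max, 0, MPI_COMM_WORLD);
    MPI_Op_free(&my_max);
}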

Mateti, MPI 28 Ten Reasons to Prefer MPI over PVM
1. MPI has more than one free, high-quality implementation.
2. MPI can efficiently program MPPs and clusters.
3. MPI is rigorously specified.
4. MPI efficiently manages message buffers.
5. MPI has full asynchronous communication.
6. MPI groups are solid, efficient, and deterministic.
7. MPI defines a third-party profiling mechanism.
8. MPI synchronization protects third-party software.
9. MPI is portable.
10. MPI is a standard.

Mateti, MPI 29 Summary
Introduction to MPI
Reinforced the manager-workers paradigm
Send, receive: blocking and non-blocking
Process groups

Mateti, MPI 30 MPI resources
Open source implementations
–MPICH
–LAM
Books
–Using MPI, by William Gropp, Ewing Lusk, and Anthony Skjellum
–Using MPI-2, by William Gropp, Ewing Lusk, and Rajeev Thakur
On-line tutorials