Presentation transcript:

Message Passing Interface (MPI) 1
Amit Majumdar, Scientific Computing Applications Group, San Diego Supercomputer Center
Tim Kaiser (now at Colorado School of Mines)

2 MPI 1 Lecture Overview
Advantages of Message Passing
Background on MPI
“Hello, world” in MPI
Key Concepts in MPI
Basic Communications in MPI
Simple send and receive program

3 Advantages of Message Passing
Universality: The MP model fits well on separate processors connected by a fast or slow network. It matches the hardware of most of today's parallel supercomputers and clusters, from a few hundred procs/cores to possibly 1,000,000 procs/cores.
Expressivity: MP has been found to be a useful and complete model in which to express parallel algorithms. It provides the control missing from data-parallel or compiler-based models.
Ease of debugging: Debugging of parallel programs remains a challenging research area. Debugging is easier for the MPI paradigm than for the shared-memory paradigm (even if it is hard to believe).

4 Advantages of Message Passing
Performance:
–This is the most compelling reason why MP will remain a permanent part of the parallel computing environment
–As modern CPUs become faster, management of their caches and the memory hierarchy is the key to getting the most out of them
–MP gives the programmer a way to explicitly associate specific data with processes, which allows the compiler and cache-management hardware to function fully
–Memory-bound applications can exhibit superlinear speedup when run on multiple PEs compared to a single PE of an MP machine

5 Background on MPI
MPI – Message Passing Interface
–Library standard defined by a committee of vendors, implementers, and parallel programmers (1993/1994)
–Used to create parallel SPMD programs based on message passing
Available on almost all parallel machines in Fortran, C, and C++
About 125 routines, including advanced routines
6 basic routines

6 MPI Implementations
Most parallel machine vendors have optimized versions
Lots of info on the web, e.g.:
–
Many books, e.g.: Using MPI: Portable Parallel Programming with the Message-Passing Interface, 2nd edition, William Gropp, Ewing Lusk, Anthony Skjellum, The MIT Press

7 Key Concepts of MPI
Used to create parallel SPMD programs based on message passing
Normally the same program is running on several different nodes
Nodes communicate using message passing

8 Include files
The MPI include file
–C and C++: mpi.h
–Fortran: mpif.h
Defines many constants used within MPI programs
In C, defines the interfaces for the functions
Compilers know where to find the include files

9 Communicators
–A parameter for most MPI calls
–A collection of processors working on some part of a parallel job
–MPI_COMM_WORLD is defined in the MPI include file as all of the processors in your job
–Can create subsets of MPI_COMM_WORLD (see the sketch after this list)
–Processors within a communicator are assigned numbers 0 to n-1
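
The slides do not show how such a subset is created; one common way (a minimal sketch, not from the lecture, with illustrative values) is MPI_Comm_split, which divides MPI_COMM_WORLD into new communicators by a "color" value:

/* Minimal sketch: split MPI_COMM_WORLD into two sub-communicators,
   one holding the even ranks and one holding the odd ranks. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int world_rank, world_size, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Processes with the same color end up in the same new communicator */
    int color = world_rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sub_comm);

    /* Within the new communicator, ranks again run 0 to n-1 */
    MPI_Comm_rank(sub_comm, &sub_rank);
    printf("world rank %d of %d has rank %d in sub-communicator %d\n",
           world_rank, world_size, sub_rank, color);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}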

10 Data types
–When sending a message, it is given a data type
–Predefined types correspond to "normal" types
MPI_REAL, MPI_FLOAT – Fortran real and C float respectively
MPI_DOUBLE_PRECISION, MPI_DOUBLE – Fortran double precision and C double respectively
MPI_INTEGER and MPI_INT – Fortran and C integer respectively
–User can also create user-defined types (see the sketch after this list)
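
The lecture does not give an example of a user-defined type; the following is a minimal sketch (not from the slides, values illustrative, assumes at least two ranks) using MPI_Type_contiguous to build a type made of three consecutive doubles and send it as a single element:

/* Minimal sketch of a user-defined (derived) datatype. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid;
    double point[3] = {1.0, 2.0, 3.0};
    MPI_Datatype triple;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    MPI_Type_contiguous(3, MPI_DOUBLE, &triple);   /* 3 doubles = one "triple" */
    MPI_Type_commit(&triple);

    if (myid == 0)        /* rank 0 sends one triple to rank 1 */
        MPI_Send(point, 1, triple, 1, 0, MPI_COMM_WORLD);
    else if (myid == 1) {
        MPI_Recv(point, 1, triple, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 got %f %f %f\n", point[0], point[1], point[2]);
    }

    MPI_Type_free(&triple);
    MPI_Finalize();
    return 0;
}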

11 Minimal MPI program
Every MPI program needs these…
–C version

#include "mpi.h"   /* the mpi include file */
/* Initialize MPI */
ierr = MPI_Init(&argc, &argv);
/* How many total PEs are there */
ierr = MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
/* What node am I (what is my rank?) */
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &iam);
...
ierr = MPI_Finalize();

In C, MPI routines are functions and return an error value

12 Minimal MPI program
Every MPI program needs these…
–Fortran version

      include 'mpif.h'    ! MPI include file
c     Initialize MPI
      call MPI_Init(ierr)
c     Find total number of PEs
      call MPI_Comm_size(MPI_COMM_WORLD, nPEs, ierr)
c     Find the rank of this node
      call MPI_Comm_rank(MPI_COMM_WORLD, iam, ierr)
      ...
      call MPI_Finalize(ierr)

In Fortran, MPI routines are subroutines, and the last parameter is an error value

13 MPI “Hello, World”
A parallel hello world program
–Initialize MPI
–Have each node print out its node number
–Quit MPI

14 C/MPI version of “Hello, World”

#include <stdio.h>
#include "mpi.h"

int main(argc, argv)
int argc;
char *argv[];
{
    int myid, numprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    printf("Hello from %d\n", myid);
    printf("Numprocs is %d\n", numprocs);
    MPI_Finalize();
}

15 Fortran/MPI version of “Hello, World”

      program hello
      include 'mpif.h'
      integer myid, ierr, numprocs
      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, numprocs, ierr)
      write(*,*) "Hello from", myid
      write(*,*) "Numprocs is", numprocs
      call MPI_Finalize(ierr)
      stop
      end

16 Basic Communications in MPI
Point-to-point communications
Data values are transferred from one processor to another
–One process sends the data
–Another receives the data
Synchronous
–Call does not return until the message is sent or received
Asynchronous
–Call indicates the start of a send or receive operation, and another call is made to determine if the call has finished (see the sketch after this list)
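
The asynchronous calls are not shown on this slide; a minimal sketch (not from the lecture, illustrative values only) of the pattern described above uses MPI_Isend and MPI_Irecv to start the operations and MPI_Wait as the second call that determines when they have finished:

/* Minimal sketch of asynchronous (nonblocking) point-to-point exchange. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, other, sendbuf, recvbuf;
    MPI_Request sreq, rreq;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid < 2) {                    /* only ranks 0 and 1 participate */
        other = 1 - myid;
        sendbuf = myid;

        MPI_Irecv(&recvbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &rreq);
        MPI_Isend(&sendbuf, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &sreq);

        /* ... useful work could be done here while messages are in flight ... */

        MPI_Wait(&sreq, MPI_STATUS_IGNORE);   /* send buffer safe to reuse */
        MPI_Wait(&rreq, &status);             /* receive has completed */
        printf("processor %d got %d\n", myid, recvbuf);
    }

    MPI_Finalize();
    return 0;
}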

17 Synchronous Send
MPI_Send: Sends data to another processor
Use MPI_Recv to "get" the data
C
–MPI_Send(&buffer, count, datatype, destination, tag, communicator);
Fortran
–call MPI_Send(buffer, count, datatype, destination, tag, communicator, ierr)
Call blocks until message is on the way

18 MPI_Send
Buffer: The data
Count: Length of source array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION (Fortran), MPI_INT (C), etc.
Destination: Processor number of destination processor in communicator
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Ierr: Error return (Fortran only)
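
As an illustration only (a minimal sketch, not from the slides; the buffer, destination, and tag values are made up, and MPI is assumed to be already initialized), a C call using these arguments might look like:

/* Send one integer from this process to processor 1 with tag 10. */
int value = 42;            /* Buffer: the data                     */
MPI_Send(&value,           /* address of the data                  */
         1,                /* Count: one element                   */
         MPI_INT,          /* Datatype                              */
         1,                /* Destination: rank 1 in the communicator */
         10,               /* Tag: arbitrary integer                */
         MPI_COMM_WORLD);  /* Communicator                          */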

19 Synchronous Receive
Call blocks until message is in buffer
C
–MPI_Recv(&buffer, count, datatype, source, tag, communicator, &status);
Fortran
–call MPI_Recv(buffer, count, datatype, source, tag, communicator, status, ierr)
Status – contains information about incoming message
–C: MPI_Status status;
–Fortran: integer status(MPI_STATUS_SIZE)

20 Status
In C
–status is a structure of type MPI_Status which contains three fields: MPI_SOURCE, MPI_TAG, and MPI_ERROR
–status.MPI_SOURCE, status.MPI_TAG, and status.MPI_ERROR contain the source, tag, and error code respectively of the received message (see the sketch after this list)
In Fortran
–status is an array of INTEGERs of length MPI_STATUS_SIZE, and the 3 constants MPI_SOURCE, MPI_TAG, MPI_ERROR are the indices of the entries that store the source, tag, and error
–status(MPI_SOURCE), status(MPI_TAG), status(MPI_ERROR) contain respectively the source, the tag, and the error code of the received message
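
As an illustration (a minimal C fragment, not from the slides; it assumes MPI is already initialized and that some rank is sending an integer), the status fields are most useful together with the wildcards MPI_ANY_SOURCE and MPI_ANY_TAG, since then they are the only way to learn where the message came from and what tag it carried:

/* Receive from any source with any tag, then inspect the status. */
int buffer;
MPI_Status status;

MPI_Recv(&buffer, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);

printf("got %d from rank %d with tag %d\n",
       buffer, status.MPI_SOURCE, status.MPI_TAG);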

21 MPI_Recv
Buffer: The data
Count: Length of source array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Source: Processor number of source processor in communicator
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Status: Information about message
Ierr: Error return (Fortran only)

22 Basic MPI Send and Receive
A parallel program to send & receive data
–Initialize MPI
–Have processor 0 send an integer to processor 1
–Have processor 1 receive an integer from processor 0
–Both processors print the data
–Quit MPI

23 Simple Send & Receive Program

#include <stdio.h>
#include "mpi.h"
/************************************************************
This is a simple send/receive program in MPI
************************************************************/
int main(argc, argv)
int argc;
char *argv[];
{
    int myid, numprocs;
    int tag, source, destination, count;
    int buffer;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

24 Simple Send & Receive Program (cont.)

    tag = 1234;
    source = 0;
    destination = 1;
    count = 1;
    if (myid == source) {
        buffer = 5678;
        MPI_Send(&buffer, count, MPI_INT, destination, tag, MPI_COMM_WORLD);
        printf("processor %d sent %d\n", myid, buffer);
    }
    if (myid == destination) {
        MPI_Recv(&buffer, count, MPI_INT, source, tag, MPI_COMM_WORLD, &status);
        printf("processor %d got %d\n", myid, buffer);
    }
    MPI_Finalize();
}

25 Simple Send & Receive Program

      program send_recv
      include "mpif.h"
! This is MPI send - recv program
      integer myid, ierr, numprocs
      integer tag, source, destination, count
      integer buffer
      integer status(MPI_STATUS_SIZE)
      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
      call MPI_Comm_size(MPI_COMM_WORLD, numprocs, ierr)
      tag = 1234
      source = 0
      destination = 1
      count = 1

26 Simple Send & Receive Program (cont.)

      if (myid .eq. source) then
         buffer = 5678
         call MPI_Send(buffer, count, MPI_INTEGER, destination, &
                       tag, MPI_COMM_WORLD, ierr)
         write(*,*) "processor ", myid, " sent ", buffer
      endif
      if (myid .eq. destination) then
         call MPI_Recv(buffer, count, MPI_INTEGER, source, &
                       tag, MPI_COMM_WORLD, status, ierr)
         write(*,*) "processor ", myid, " got ", buffer
      endif
      call MPI_Finalize(ierr)
      stop
      end

27 The 6 Basic C MPI Calls
MPI is used to create parallel programs based on message passing
Usually the same program is run on multiple processors
The 6 basic calls in C MPI are:
–MPI_Init(&argc, &argv)
–MPI_Comm_rank(MPI_COMM_WORLD, &myid)
–MPI_Comm_size(MPI_COMM_WORLD, &numprocs)
–MPI_Send(&buffer, count, MPI_INT, destination, tag, MPI_COMM_WORLD)
–MPI_Recv(&buffer, count, MPI_INT, source, tag, MPI_COMM_WORLD, &status)
–MPI_Finalize()

28 The 6 Basic Fortran MPI Calls
MPI is used to create parallel programs based on message passing
Usually the same program is run on multiple processors
The 6 basic calls in Fortran MPI are:
–MPI_Init(ierr)
–MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
–MPI_Comm_size(MPI_COMM_WORLD, numprocs, ierr)
–MPI_Send(buffer, count, MPI_INTEGER, destination, tag, MPI_COMM_WORLD, ierr)
–MPI_Recv(buffer, count, MPI_INTEGER, source, tag, MPI_COMM_WORLD, status, ierr)
–call MPI_Finalize(ierr)