MPI Message Passing Interface Portable Parallel Programs

Message Passing Interface
Derived from several previous libraries
 – PVM, P4, Express
Standard message-passing library
 – includes best of several previous libraries
Versions for C/C++ and FORTRAN
Available for free
Can be installed on
 – Networks of Workstations
 – Parallel Computers (Cray T3E, IBM SP2, Parsytec PowerXplorer, others)

MPI Services
Hide details of architecture
Hide details of message passing, buffering
Provides message management services
 – packaging
 – send, receive
 – broadcast, reduce, scatter, gather
 – message modes

MPI Program Organization
MIMD: Multiple Instruction, Multiple Data
 – Every processor runs a different program
SPMD: Single Program, Multiple Data
 – Every processor runs the same program
 – Each processor computes with different data
 – Variation of computation on different processors through if or switch statements

MPI Program Organization
MIMD in an SPMD framework
 – Different processors can follow different computation paths
 – Branch on if or switch based on processor identity (see the sketch below)
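A minimal sketch of this idea (illustrative, not from the original slides): every process runs the same code and branches on its own rank, so the coordinator/worker roles shown in the comments are assumptions.

    /* SPMD sketch: same program everywhere, behavior chosen by rank */
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        /* coordinator path: e.g. distribute work, collect results */
    } else {
        /* worker path: e.g. compute on this process's share of the data */
    }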

MPI Basics
Starting and Finishing
Identifying yourself
Sending and Receiving messages

MPI starting and finishing
Statement needed in every program before any other MPI code:
    MPI_Init(&argc, &argv);
Last statement of MPI code must be:
    MPI_Finalize();
Program will not terminate without this statement
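A minimal skeleton showing where these two calls sit (a sketch only; the full example later in the slides adds the send/receive logic):

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char* argv[])
    {
        MPI_Init(&argc, &argv);     /* must precede any other MPI call */

        /* ... MPI communication and computation go here ... */

        MPI_Finalize();             /* last MPI call before exiting */
        return 0;
    }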

MPI Messages
Message content, a sequence of bytes
Message needs a wrapper
 – analogous to an envelope for a letter

Letter                      Message
Address                     Destination
Return Address              Source
Type of Mailing (class)     Message type
Letter Weight               Size (count)
Country                     Communicator
Magazine                    Broadcast

MPI Basics
Communicator
 – Collection of processes
 – Determines scope to which messages are relative
 – identity of process (rank) is relative to communicator
 – scope of global communications (broadcast, etc.)

MPI Message Protocol, Send
message contents    block of memory
count               number of items in message
message type        type of each item
destination         rank of processor to receive
tag                 integer designator for message
communicator        the communicator within which the message is sent

MPI Message Protocol, Receive
message contents    buffer in memory to store received message
count               size of buffer (number of items)
message type        type of each item
source              rank of processor sending
tag                 integer designator for message
communicator        the communicator within which the message is sent
status              information about message received

Message Passing Example

    #include <stdio.h>
    #include <string.h>
    #include "mpi.h"           /* includes MPI library code specs */

    #define MAXSIZE 100

    int main(int argc, char* argv[])
    {
        int myRank;            /* rank (identity) of process */
        int numProc;           /* number of processors */
        int source;            /* rank of sender */
        int dest;              /* rank of destination */
        int tag = 0;           /* tag to distinguish messages */
        char mess[MAXSIZE];    /* message (other types possible) */
        int count;             /* number of items in message */
        MPI_Status status;     /* status of message received */

Message Passing Example

        MPI_Init(&argc, &argv);                   /* start MPI */

        /* get number of processes */
        MPI_Comm_size(MPI_COMM_WORLD, &numProc);

        /* get rank of this process */
        MPI_Comm_rank(MPI_COMM_WORLD, &myRank);

        /***********************************************/
        /* code to send, receive and process messages  */
        /***********************************************/

        MPI_Finalize();                           /* shut down MPI */
    }

Message Passing Example

        if (myRank != 0) {              /* all processes send to root */
            /* create message */
            sprintf(mess, "Hello from %d", myRank);
            dest = 0;                   /* destination is root */
            count = strlen(mess) + 1;   /* include '\0' in message */
            MPI_Send(mess, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
        }
        else {  /* root (0) process receives and prints messages */
                /* from each processor in rank order */
            for (source = 1; source < numProc; source++) {
                MPI_Recv(mess, MAXSIZE, MPI_CHAR, source, tag,
                         MPI_COMM_WORLD, &status);
                printf("%s\n", mess);
            }
        }
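To try the example, most MPI installations provide a compiler wrapper and a launcher. The command names and the file name below are assumptions (they vary by installation), not part of the original slides:

    mpicc -o hello msg_example.c     (compile with the MPI wrapper compiler)
    mpiexec -n 4 ./hello             (run with 4 processes; mpirun is a common alternative)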

MPI message protocol
Buffer in MPI_Recv must contain enough space for the message
Buffer in MPI_Send need not all be sent
Count (second parameter) in MPI_Send determines the number of items of the given type which are sent (type given by third parameter)
Count (second parameter) in MPI_Recv specifies the capacity of the buffer (number of items) in terms of the type given in the third parameter

MPI message protocol
Send - Receive is point-to-point; the destination process is specified by the fourth parameter (dest) in MPI_Send
Messages can be tagged by an integer to distinguish messages with different purposes, via the fifth argument in MPI_Send and MPI_Recv
MPI_Recv can specify a specific source from which to receive (fourth parameter)
MPI_Recv can receive from any source or with any tag using MPI_ANY_SOURCE and MPI_ANY_TAG
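A small sketch (assumed, not from the slides) of a wildcard receive that accepts any sender and any tag, then reads the status fields to find out who actually sent the message:

    /* Sketch: wildcard receive; actual sender and tag come back in status */
    MPI_Recv(mess, MAXSIZE, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    printf("received from rank %d with tag %d\n",
           status.MPI_SOURCE, status.MPI_TAG);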

MPI message protocol
Communicator, the sixth parameter in MPI_Send and MPI_Recv, determines the context for destination and source ranks
MPI_COMM_WORLD is an automatically supplied communicator which includes all processes created at start-up
Other communicators can be defined by the user to group processes and to create virtual topologies
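One common way to define such a communicator is MPI_Comm_split; the sketch below (assumed, not from the slides) puts even-ranked and odd-ranked processes into separate groups, each with its own rank numbering starting at 0:

    /* Sketch: split MPI_COMM_WORLD into even-rank and odd-rank groups */
    MPI_Comm half_comm;
    int world_rank, half_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half_comm);
    MPI_Comm_rank(half_comm, &half_rank);     /* rank within the new group */
    MPI_Comm_free(&half_comm);                /* release it when done */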

MPI message protocol
Status of message received by MPI_Recv is returned in the seventh (status) parameter
Number of items actually received can be determined from status by using the function MPI_Get_count
The following call inserted into the previous code would return the number of characters sent in the integer variable cnt:
    MPI_Get_count(&status, MPI_CHAR, &cnt);
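In context it would sit right after the MPI_Recv in the earlier example; the declaration of cnt is an added assumption, since the original example does not declare it:

    int cnt;                                  /* not declared in the original example */
    MPI_Recv(mess, MAXSIZE, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_CHAR, &cnt);   /* cnt = characters actually received */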

Broadcasting a message
Broadcast: one sender, many receivers
Includes all processes in the communicator; all processes must make an equivalent call to MPI_Bcast
Any processor may be the sender (root), as determined by the fourth parameter
First three parameters specify the message as for MPI_Send and MPI_Recv; the fifth parameter specifies the communicator
Broadcast serves as a global synchronization

MPI_Bcast() Syntax
MPI_Bcast(mess, count, MPI_INT, root, MPI_COMM_WORLD);

mess              pointer to message buffer
count             number of items sent
MPI_INT           type of item sent
                  Note: count and type should be the same on all processors
root              sending processor
MPI_COMM_WORLD    communicator within which broadcast takes place
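A short, self-contained sketch of a broadcast (an assumed example, not from the slides): rank 0 sets a value, and every process, including rank 0, makes the same MPI_Bcast call to receive it.

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char* argv[])
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            value = 42;                        /* root chooses the data */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* all ranks call this */
        printf("rank %d now has value %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }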

Timing Programs
MPI_Wtime()
 – returns a double giving time in seconds from a fixed time in the past
 – To time a program, record MPI_Wtime() in a variable at the start, then again at the finish; the difference is the elapsed time

    starttime = MPI_Wtime();
    /* part of program to be timed */
    stoptime = MPI_Wtime();
    time = stoptime - starttime;
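Because each process takes its own timestamps, timing is often paired with a barrier so all processes enter the measured region together. A sketch under that assumption (it reuses myRank from the earlier example and is not part of the original slides):

    /* Sketch: synchronize before timing, report elapsed time from rank 0 */
    double starttime, stoptime;
    MPI_Barrier(MPI_COMM_WORLD);
    starttime = MPI_Wtime();
    /* ... part of program to be timed ... */
    stoptime = MPI_Wtime();
    if (myRank == 0)
        printf("elapsed time: %f seconds\n", stoptime - starttime);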