Introduction to MPI Nischint Rajmohan 5 November 2007

What can you expect?
– Overview of MPI
– Basic MPI commands
– How to parallelize and execute a program using MPI & MPICH2

What is outside the scope?
– Technical details of MPI
– MPI implementations other than MPICH
– Hardware-specific optimization techniques

Overview of MPI
MPI stands for Message Passing Interface. What is Message Passing Interface?
– It is not a programming language or compiler specification
– It is not a specific implementation or product
– MPI is a specification for the developers and users of message passing libraries. By itself, it is NOT a library, but rather the specification of what such a library should be.
– The specification lets you create libraries that allow you to solve problems in parallel, using message passing to communicate between processes
– It provides bindings for widely used programming languages such as Fortran and C/C++

Background on MPI
Early vendor systems (Intel's NX, IBM's EUI, TMC's CMMD) were not portable (or very capable)
Early portable systems (PVM, p4, TCGMSG, Chameleon) were mainly research efforts
– Did not address the full spectrum of issues
– Lacked vendor support
– Were not implemented at the most efficient level
The MPI Forum was organized in 1992 with broad participation by:
– vendors: IBM, Intel, TMC, SGI, Convex, Meiko
– portability library writers: PVM, p4
– users: application scientists and library writers
– finished in 18 months
– Library standard defined by a committee of vendors, implementers, and parallel programmers

Reasons for using an MPI standard
– Standardization - MPI is the only message passing library that can be considered a standard. It is supported on virtually all HPC platforms and has effectively replaced all previous message passing libraries.
– Portability - There is no need to modify your source code when you port your application to a different platform that supports (and is compliant with) the MPI standard.
– Performance Opportunities - Vendor implementations should be able to exploit native hardware features to optimize performance.
– Functionality - Over 115 routines are defined in MPI-1 alone.
– Availability - A variety of implementations are available, both vendor and public domain.

MPI Operation
[figure: data blocks A1-A4 and B1-B4 exchanged between two processes via Send and Receive, with processing taking place inside a communicator]

MPI Programming Model

MPI Library
– Environment Management Routines. Ex: MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Finalize
– Point to Point Communication Routines
  – Non-Blocking Routines. Ex: MPI_Isend, MPI_Irecv (illustrated in the sketch after this list)
  – Blocking Routines. Ex: MPI_Send, MPI_Recv
– Collective Communication Routines. Ex: MPI_Barrier, MPI_Bcast
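The sample programs later in the deck use only the blocking routines, so here is a minimal sketch of the non-blocking pair MPI_Isend/MPI_Irecv completed with MPI_Waitall. It is not from the original slides; the tag, the message size, and the assumption of at least two processes are arbitrary choices made for illustration.

/* Hedged sketch (not from the slides): non-blocking exchange between
   ranks 0 and 1. Each MPI_Isend/MPI_Irecv returns immediately and hands
   back a request; MPI_Waitall completes both operations. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, sendval, recvval;
    MPI_Request reqs[2];
    MPI_Status stats[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {                  /* only ranks 0 and 1 take part */
        int other = 1 - rank;
        sendval = rank;

        /* post receive and send; both calls return without waiting */
        MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);

        /* useful computation could overlap with communication here */

        MPI_Waitall(2, reqs, stats);
        printf("Rank %d received %d\n", rank, recvval);
    }

    MPI_Finalize();
    return 0;
}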

Environment Management Routines
MPI_Init
– Initializes the MPI execution environment. This function must be called in every MPI program, must be called before any other MPI function, and must be called only once in an MPI program. For C programs, MPI_Init may be used to pass the command line arguments to all processes, although this is not required by the standard and is implementation dependent.
C: MPI_Init (&argc,&argv)
Fortran: MPI_INIT (ierr)

Environment Management Routines contd.
MPI_Comm_rank
– Determines the rank of the calling process within the communicator. Initially, each process is assigned a unique integer rank between 0 and (number of processes - 1) within the communicator MPI_COMM_WORLD. This rank is often referred to as a task ID. If a process becomes associated with other communicators, it will have a unique rank within each of these as well.
C: MPI_Comm_rank (comm,&rank)
Fortran: MPI_COMM_RANK (comm,rank,ierr)

Environment Management Routines contd.
MPI_Comm_size
– Determines the number of processes in the group associated with a communicator. Generally used within the communicator MPI_COMM_WORLD to determine the number of processes being used by your application.
C: MPI_Comm_size (comm,&size)
Fortran: MPI_COMM_SIZE (comm,size,ierr)

MPI_Finalize
– Terminates the MPI execution environment. This function should be the last MPI routine called in every MPI program; no other MPI routines may be called after it.
C: MPI_Finalize ()
Fortran: MPI_FINALIZE (ierr)

MPI Sample Program: Environment Management Routines

In C:

/* the mpi include file */
#include "mpi.h"
#include <stdio.h>

int main( int argc, char *argv[] )
{
    int rank, size;

    /* Initialize MPI */
    MPI_Init( &argc, &argv );

    /* How many processors are there? */
    MPI_Comm_size( MPI_COMM_WORLD, &size );

    /* What processor am I (what is my rank)? */
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    printf( "I am %d of %d\n", rank, size );

    MPI_Finalize();
    return 0;
}

In Fortran:

      program main
! the mpi include file
      include 'mpif.h'
      integer ierr, rank, size

! Initialize MPI
      call MPI_INIT( ierr )

! How many processors are there?
      call MPI_COMM_SIZE( MPI_COMM_WORLD, size, ierr )

! What processor am I (what is my rank)?
      call MPI_COMM_RANK( MPI_COMM_WORLD, rank, ierr )

      print *, 'I am ', rank, ' of ', size

      call MPI_FINALIZE( ierr )
      end

Point to Point Communication Routines
MPI_Send
– Basic blocking send operation. The routine returns only after the application buffer in the sending task is free for reuse. Note that this routine may be implemented differently on different systems. The MPI standard permits the use of a system buffer but does not require it.
C: MPI_Send (&buf,count,datatype,dest,tag,comm)
Fortran: MPI_SEND (buf,count,datatype,dest,tag,comm,ierr)

Point to Point Communication Routines contd.
MPI_Recv
– Receives a message and blocks until the requested data is available in the application buffer in the receiving task.
C: MPI_Recv (&buf,count,datatype,source,tag,comm,&status)
Fortran: MPI_RECV (buf,count,datatype,source,tag,comm,status,ierr)

MPI Sample Program: Send and Receive

In C:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char** argv)
{
    int my_PE_num, numbertoreceive, numbertosend = 42;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_PE_num);

    if (my_PE_num == 0) {
        MPI_Recv(&numbertoreceive, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        printf("Number received is: %d\n", numbertoreceive);
    }
    else
        MPI_Send(&numbertosend, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}

In Fortran:

      program shifter
      implicit none
      include 'mpif.h'
      integer my_pe_num, errcode, numbertoreceive, numbertosend
      integer status(MPI_STATUS_SIZE)

      call MPI_INIT(errcode)
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_pe_num, errcode)

      numbertosend = 42

      if (my_pe_num .EQ. 0) then
         call MPI_RECV(numbertoreceive, 1, MPI_INTEGER, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, status, errcode)
         print *, 'Number received is: ', numbertoreceive
      endif

      if (my_pe_num .EQ. 1) then
         call MPI_SEND(numbertosend, 1, MPI_INTEGER, 0, 10, MPI_COMM_WORLD, errcode)
      endif

      call MPI_FINALIZE(errcode)
      end
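Because the C example receives with MPI_ANY_SOURCE and MPI_ANY_TAG, the status argument is the only way to discover who actually sent the message. The following short sketch is not part of the original slides; it repeats the same pattern but has rank 0 query status.MPI_SOURCE, status.MPI_TAG, and MPI_Get_count after the receive.

/* Hedged sketch (not from the slides): inspecting the MPI_Status filled
   in by a wildcard receive. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value, count;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);   /* items actually received */
        printf("Got %d (count=%d) from rank %d with tag %d\n",
               value, count, status.MPI_SOURCE, status.MPI_TAG);
    } else if (rank == 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 0, 10, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}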

Collective Communication Routines
MPI_Barrier
– Creates a barrier synchronization in a group. Each task, when reaching the MPI_Barrier call, blocks until all tasks in the group reach the same MPI_Barrier call.
C: MPI_Barrier (comm)
Fortran: MPI_BARRIER (comm,ierr)

MPI_Bcast
– Broadcasts (sends) a message from the process with rank "root" to all other processes in the group.
C: MPI_Bcast (&buffer,count,datatype,root,comm)
Fortran: MPI_BCAST (buffer,count,datatype,root,comm,ierr)
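Neither collective routine appears in the sample programs, so here is a minimal sketch, not from the original slides, in which rank 0 broadcasts one integer to every process and all ranks then synchronize at a barrier. The value 100 and the choice of rank 0 as root are arbitrary.

/* Hedged sketch (not from the slides): broadcast followed by a barrier. */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 100;                         /* only the root has the data */

    /* every rank calls MPI_Bcast; afterwards all ranks hold value == 100 */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);             /* wait until everyone has it */
    printf("Rank %d has value %d\n", rank, value);

    MPI_Finalize();
    return 0;
}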

Sources of Deadlocks
Send a large message from process 0 to process 1
– If there is insufficient storage at the destination, the send must wait for the user to provide the memory space (through a receive)
What happens with this code?

Process 0      Process 1
Send(1)        Send(0)
Recv(1)        Recv(0)

This is called "unsafe" because it depends on the availability of system buffers
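One common remedy, shown here as a sketch rather than as the slides' own solution, is to let the library pair the two transfers with MPI_Sendrecv, which does not depend on system buffer space; reordering the calls so that one side receives first, or using the non-blocking routines sketched earlier, also works. The message length N is an arbitrary stand-in for a "large" message.

/* Hedged sketch (not from the slides): deadlock-free exchange between
   ranks 0 and 1 using MPI_Sendrecv. */
#include <stdio.h>
#include "mpi.h"

#define N 100000                     /* arbitrary "large" message length */

int main(int argc, char *argv[])
{
    int rank;
    static double sendbuf[N], recvbuf[N];   /* static keeps them off the stack */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank < 2) {                  /* only ranks 0 and 1 exchange data */
        int other = 1 - rank;
        MPI_Sendrecv(sendbuf, N, MPI_DOUBLE, other, 0,
                     recvbuf, N, MPI_DOUBLE, other, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank %d completed the exchange safely\n", rank);
    }

    MPI_Finalize();
    return 0;
}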

MPICH – MPI Implementation
– MPICH is a freely available, portable implementation of MPI
– MPICH acts as the middleware between the MPI parallel library API and the hardware environment
– MPICH builds are available for Unix-based systems, and an installer is available for Windows
– MPICH2 is the latest version of the implementation
unix.mcs.anl.gov/mpi/mpich/

MPI Program Compilation (Unix)
Fortran:
mpif90 -c hello_world.f
mpif90 -o hello_world hello_world.o
C:
mpicc -c hello_world.c
mpicc -o hello_world hello_world.o

MPI Program Execution
Fortran/C:
mpiexec -n 4 ./hello_world
– mpiexec is the command for execution in a parallel environment
– -n is used for specifying the number of processes

mpiexec -help
– This command should list all the options available for running MPI programs

If you don't have mpiexec installed on your system, use mpirun and use -np instead of -n

MPI Program Execution contd.
mpiexec -machinefile hosts -n 7 ./hello_world
– This flag allows you to specify a file containing the host names of the processors you want to use

Sample hosts file:
master
node2
node3
node5
node6

MPICH on Windows
Installing MPICH2
1. Download the Win32-IA32 version of MPICH2 from:
2. Run the executable, mpich win32-ia32.msi (or a more recent version). Most likely it will result in the following error:
3. To download version 1.1 use this link: 25E3-F D1E7CF3A3&displaylang=en

MPICH on Windows contd.
4. Install the .NET Framework program.
5. Install the MPICH2 executable. Write down the passphrase for future reference. The passphrase must be consistent across a network.
6. Add the MPICH2 path to Windows:
   1. Right click "My Computer" and pick Properties
   2. Select the Advanced tab
   3. Select the Environment Variables button
   4. Highlight the Path variable under System Variables and click Edit. Add "C:\MPICH2\bin" to the end of the list; make sure to separate this from the prior path with a semicolon.
7. Run the example executable to ensure correct installation:
   mpiexec -n 2 cpi.exe
8. If installed on a dual-processor machine, verify that both processors are being utilized by examining "CPU Usage History" in the Windows Task Manager.
9. The first time mpiexec is run in each session, it will ask for a username and password. To prevent being asked for this in the future, this information can be encrypted into the Windows registry by running:
   mpiexec -register
   The username and password are your Windows XP logon information.

MPICH on Windows contd.
Compilation (Fortran)
ifort /fpp /include:"C:/MPICH2/INCLUDE" /names:uppercase /iface:cref /libs:static /threads /c hello_world.f
– The above command will compile the parallel program and create a .obj file.
ifort -o hello_world.exe hello_world.obj C:/MPICH2/LIB/cxx.lib C:/MPICH2/LIB/mpi.lib C:/MPICH2/LIB/fmpich2.lib C:/MPICH2/LIB/fmpich2s.lib C:/MPICH2/LIB/fmpich2g.lib
– The above command will link the object file and create the executable. The executable is run in the same way as specified before, using the mpiexec command.

THE END
Useful Sources:
– CS High Performance Computing & Architecture
For more assistance, you can contact Nischint Rajmohan MK