Chapter 3: MPI

MPI
MPI = Message Passing Interface
A specification of message passing libraries for developers and users:
– Not a library by itself, but specifies what such a library should be
– Specifies an application programming interface (API) for such libraries
– Many libraries implement this API on different platforms: the MPI libraries
Goal: provide a standard for writing message passing programs
– Portable, efficient, flexible
Language bindings: C, C++, Fortran

The Program

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int         my_rank;       /* rank of process           */
    int         p;             /* number of processes       */
    int         source;        /* rank of sender            */
    int         dest;          /* rank of receiver          */
    int         tag = 0;       /* tag for messages          */
    char        message[100];  /* storage for message       */
    MPI_Status  status;        /* return status for receive */

    /* Start up MPI */
    MPI_Init(&argc, &argv);

    /* Find out process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

The Program (continued)

    /* Find out number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (my_rank != 0) {
        /* Create message */
        sprintf(message, "Greetings from process %d!", my_rank);
        dest = 0;
        /* Use strlen+1 so that '\0' gets transmitted */
        MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag,
                 MPI_COMM_WORLD);
    } else {  /* my_rank == 0 */
        for (source = 1; source < p; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag,
                     MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }

    /* Shut down MPI */
    MPI_Finalize();

    return 0;
}   /* main */

General MPI programs

#include "mpi.h"

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    /* main part of the program:
       the MPI function calls used here depend on your data
       partitioning and the parallelization architecture */

    MPI_Finalize();
    return 0;
}

MPI Basics
MPI's pre-defined constants, function prototypes, etc. are declared in a header file. This file must be included in your code wherever MPI function calls appear (in main and in user subroutines/functions):
– #include "mpi.h" for C codes
– #include "mpi++.h" for C++ codes
– include "mpif.h" for Fortran 77 and Fortran 90/95 codes
MPI_Init must be the first MPI function called.
Terminate MPI by calling MPI_Finalize.
Each of these two functions must be called exactly once in user code.

MPI Basics
C is a case-sensitive language.
MPI function names always begin with "MPI_", followed by a specific name with the leading character capitalized, e.g., MPI_Comm_rank.
MPI pre-defined constants are expressed in upper-case characters, e.g., MPI_COMM_WORLD.

Basic MPI Datatypes

MPI datatype          C datatype
MPI_CHAR              char
MPI_SIGNED_CHAR       signed char
MPI_UNSIGNED_CHAR     unsigned char
MPI_SHORT             signed short
MPI_UNSIGNED_SHORT    unsigned short
MPI_INT               signed int
MPI_UNSIGNED          unsigned int
MPI_LONG              signed long
MPI_UNSIGNED_LONG     unsigned long
MPI_FLOAT             float
MPI_DOUBLE            double
MPI_LONG_DOUBLE       long double
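
The datatype argument passed to MPI routines must match the C type of the buffer. As a minimal sketch (not from the original slides), the program below has rank 0 send an array of doubles to rank 1 using MPI_DOUBLE; it assumes the job is launched with at least two processes.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    double a[4] = {1.0, 2.0, 3.0, 4.0};   /* C type double <-> MPI_DOUBLE */
    int my_rank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    if (my_rank == 0) {
        MPI_Send(a, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (my_rank == 1) {
        MPI_Recv(a, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        printf("Received %g %g %g %g\n", a[0], a[1], a[2], a[3]);
    }

    MPI_Finalize();
    return 0;
}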

MPI is Simple
Many parallel programs can be written using just these six functions, only two of which are non-trivial:
– MPI_INIT
– MPI_FINALIZE
– MPI_COMM_SIZE
– MPI_COMM_RANK
– MPI_SEND
– MPI_RECV
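
As a small illustration (not part of the original slides), here is a complete program that uses only the C bindings of these six functions: rank 0 sends an integer to rank 1, which increments it and sends it back. It assumes the job is run with at least two processes.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int my_rank, nprocs, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (my_rank == 0) {
        value = 100;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
        printf("Rank 0 received %d back from rank 1\n", value);  /* prints 101 */
    } else if (my_rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        value = value + 1;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}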

Initialization
MPI_Init() initializes the MPI environment.
– Must be called before any other MPI routine (so put it at the beginning of the code).
– Can be called only once; subsequent calls are erroneous.

int MPI_Init(int *argc, char ***argv)

Termination
MPI_Finalize() cleans up the MPI environment.
– Must be called before the program exits.
– No other MPI routine can be called after this call, not even MPI_Init().

Processes
MPI is process-oriented: a program consists of multiple processes, each typically mapped to one processor.
MIMD: each process may run its own code. In practice, each runs its own copy of the same code (SPMD).
MPI processes are identified by their ranks:
– If there are nprocs processes in the computation, the ranks range over 0, 1, …, nprocs-1.
– nprocs does not change during the computation.

Communicators
A communicator is a group of processes that can communicate with one another.
Most MPI routines require a communicator argument to specify the collection of processes the communication is based on.
All processes in the computation form the communicator MPI_COMM_WORLD.
– MPI_COMM_WORLD is pre-defined by MPI and available anywhere.
You can create subgroups/sub-communicators within MPI_COMM_WORLD.
– A process may belong to different communicators and have a different rank in each of them.
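
The slides do not show how sub-communicators are created; as a minimal sketch, the standard routine MPI_Comm_split can divide MPI_COMM_WORLD into groups (here, even and odd world ranks), and each process then has a new rank within its sub-communicator.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int world_rank, sub_rank, color;
    MPI_Comm subcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    color = world_rank % 2;   /* 0 = even-rank group, 1 = odd-rank group */
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);
    MPI_Comm_rank(subcomm, &sub_rank);

    printf("World rank %d has rank %d in sub-communicator %d\n",
           world_rank, sub_rank, color);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}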

Size and Rank
Number of processes: MPI_Comm_size()
Which process am I: MPI_Comm_rank()
With these you can compute the data decomposition, etc.
– Knowing the total number of grid points, the total number of processes, and the current process rank, you can calculate which portion of the data the current process is to work on (see the sketch below).
Ranks are also used to specify the source and destination of communications.

int my_rank, ncpus;
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &ncpus);
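
A minimal sketch of the block decomposition idea described above: each process uses its rank and the total number of processes to compute the index range it owns. The grid size n = 1000 is an assumed value for illustration only.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int my_rank, ncpus;
    int n = 1000;            /* total number of grid points (assumed) */
    int lo, hi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ncpus);

    lo = (my_rank * n) / ncpus;        /* first index this process owns   */
    hi = ((my_rank + 1) * n) / ncpus;  /* one past the last index it owns */

    printf("Process %d of %d works on indices [%d, %d)\n",
           my_rank, ncpus, lo, hi);

    MPI_Finalize();
    return 0;
}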

Compile and Run the Program
Compile the MPI program:
  mpicc -o greetings greetings.c
After compiling, an executable file named greetings is generated.
If running on the head node:
  mpirun -np 4 ./greetings
  Greetings from process 1!
  Greetings from process 2!
  Greetings from process 3!
Running jobs directly on the head node like this is NOT allowed on HPC supercomputers.

PBS scripts
PBS: Portable Batch System
A cluster is shared with other users:
– Need to use a job submission system.
PBS will allocate the job to some compute node, log in as the user, and execute it.
Useful commands:
– qsub: submits a job
– qstat: monitors job status
– qdel: deletes a job from a queue

A Job with a PBS script

vi myjob1

#!/bin/bash
#PBS -N job1
#PBS -q production
#PBS -l select=4:ncpus=1
#PBS -l place=free
#PBS -V

cd $PBS_O_WORKDIR
mpirun -np 4 -machinefile $PBS_NODEFILE ./greetings

Submit Jobs
Submit the job:
  qsub myjob1
  283724.service0
Check the job status:
  qstat

  PBS Pro Server andy.csi.cuny.edu at CUNY CSI HPC Center

  Job id       Name          User           Time Use   S  Queue
  -----------  ------------  -------------  ---------  -  ------------
  service0     methane_g09   michael.green             R  qlong8_gau
  service0     methane_g09   michael.green             R  qlong8_gau
  service0     BEAST_serial  edward.myers   2373:38:   R  qserial
  service0     2xTDR         e.sandoval     0          H  qlong16_qdr

Submit Jobs (continued)
See the output file:
  cat job1.o283724
  Greetings from process 1!
  Greetings from process 2!
  Greetings from process 3!
See the error file:
  cat job1.e283724

PBS scripts

PBS directive                        Description
#PBS -N jobname                      Assign a name to the job
#PBS -M email_address                Specify the notification email address
#PBS -m b                            Send email at job start
#PBS -m e                            Send email at job end
#PBS -m a                            Send email at job abort
#PBS -o out_file                     Redirect stdout to the specified file
#PBS -e err_file                     Redirect stderr to the specified file
#PBS -q queue_name                   Specify the queue to be used
#PBS -l select=chunk_specification   Specify MPI resource requirements
#PBS -l walltime=runtime             Set the wallclock time limit
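
As an illustration of these directives (a sketch, not from the original slides; the email address is a placeholder), the earlier myjob1 script could be extended to name its output and error files and request email notification at job start and end:

#!/bin/bash
#PBS -N job1
#PBS -M your.name@example.edu
#PBS -m be
#PBS -o job1.out
#PBS -e job1.err
#PBS -q production
#PBS -l select=4:ncpus=1
#PBS -l walltime=00:10:00
#PBS -V

cd $PBS_O_WORKDIR
mpirun -np 4 -machinefile $PBS_NODEFILE ./greetings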