Parallel Programming Using Basic MPI Presented by Timothy H. Kaiser, Ph.D., San Diego Supercomputer Center

Presentation transcript:

Parallel Programming Using Basic MPI
Presented by Timothy H. Kaiser, Ph.D., San Diego Supercomputer Center

2 Talk Overview
Background on MPI
Documentation
Hello world in MPI
Basic communications
Simple send and receive program

3 Background on MPI
MPI - Message Passing Interface
–Library standard defined by a committee of vendors, implementers, and parallel programmers
–Used to create parallel programs based on message passing
100% portable: one standard, many implementations
Available on almost all parallel machines in C and Fortran
Over 100 routines, but only 6 are needed for basic programs

4 Documentation
MPI home page (contains the library standard):
Books
–"MPI: The Complete Reference" by Snir, Otto, Huss-Lederman, Walker, and Dongarra, MIT Press (also in Postscript and HTML)
–"Using MPI" by Gropp, Lusk, and Skjellum, MIT Press
Tutorials: many online, just do a search

5 MPI Implementations
Most parallel supercomputer vendors provide optimized implementations
Others:
–www-unix.mcs.anl.gov/mpi/mpich
–GLOBUS:

6 Key Concepts of MPI
Used to create parallel programs based on message passing
–Normally the same program is running on several different processors
–Processors communicate using message passing
Typical methodology:
  start job on n processors
  do i=1 to j
    each processor does some calculation
    pass messages between processors
  end do
  end job

7 Messages
Simplest message: an array of data of one type.
Predefined types correspond to commonly used types in a given language
–MPI_REAL (Fortran), MPI_FLOAT (C)
–MPI_DOUBLE_PRECISION (Fortran), MPI_DOUBLE (C)
–MPI_INTEGER (Fortran), MPI_INT (C)
User can define more complex types and send packages.
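
For illustration, a minimal C sketch of one way to define such a user-defined type, using MPI_Type_contiguous and MPI_Type_commit (standard MPI routines, though not covered on these slides); the function name make_triple_type is only a placeholder:

#include <mpi.h>

/* Build and commit a derived type describing 3 contiguous doubles */
void make_triple_type(MPI_Datatype *triple)
{
    MPI_Type_contiguous(3, MPI_DOUBLE, triple);
    MPI_Type_commit(triple);   /* a type must be committed before use */
}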

8 Communicators
Communicator
–A collection of processors working on some part of a parallel job
–Used as a parameter for most MPI calls
–MPI_COMM_WORLD includes all of the processors in your job
–Processors within a communicator are assigned numbers (ranks) 0 to n-1
–Can create subsets of MPI_COMM_WORLD
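
As a sketch of the last point, the standard routine MPI_Comm_split (not covered further on these slides) is one way to divide MPI_COMM_WORLD into sub-communicators; here even and odd world ranks end up in separate groups:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int world_rank, sub_rank;
    MPI_Comm sub_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* color chooses which new communicator a rank joins;
       key (here world_rank) orders the ranks inside it */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
    MPI_Comm_rank(sub_comm, &sub_rank);

    printf("world rank %d has rank %d in its sub-communicator\n",
           world_rank, sub_rank);

    MPI_Comm_free(&sub_comm);
    MPI_Finalize();
    return 0;
}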

9 Include files
The MPI include file
–C: mpi.h
–Fortran: mpif.h (an f90 module is a good place for this)
Defines many constants used within MPI programs
In C, also defines the interfaces for the functions
Compilers know where to find the include files

10 Minimal MPI program
Every MPI program needs these…
–C version
/* the mpi include file */
#include <mpi.h>
int nPEs, ierr, iam;
/* Initialize MPI */
ierr = MPI_Init(&argc, &argv);
/* How many processors (nPEs) are there? */
ierr = MPI_Comm_size(MPI_COMM_WORLD, &nPEs);
/* What processor am I (what is my rank)? */
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &iam);
...
ierr = MPI_Finalize();
In C, MPI routines are functions and return an error value

11 Minimal MPI program
Every MPI program needs these…
–Fortran version
! MPI include file
include 'mpif.h'
integer nPEs, ierr, iam
! Initialize MPI
call MPI_Init(ierr)
! How many processors (nPEs) are there?
call MPI_Comm_size(MPI_COMM_WORLD, nPEs, ierr)
! What processor am I (what is my rank)?
call MPI_Comm_rank(MPI_COMM_WORLD, iam, ierr)
...
call MPI_Finalize(ierr)
In Fortran, MPI routines are subroutines, and the last parameter is an error value

12 Exercise 1: Hello World
Write a parallel “hello world” program
–Initialize MPI
–Have each processor print out “Hello, World” and its processor number (rank)
–Quit MPI
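
One possible C solution sketch (the file name and the compile/run commands are typical choices, not prescribed by the slides):

/* hello.c - compile with an MPI wrapper such as mpicc,
   run with a launcher such as "mpirun -np 4 ./hello" */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int nPEs, iam;

    MPI_Init(&argc, &argv);                  /* initialize MPI        */
    MPI_Comm_size(MPI_COMM_WORLD, &nPEs);    /* how many processors   */
    MPI_Comm_rank(MPI_COMM_WORLD, &iam);     /* my rank               */

    printf("Hello, World from processor %d of %d\n", iam, nPEs);

    MPI_Finalize();                          /* quit MPI              */
    return 0;
}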

13 Basic Communication
Data values are transferred from one processor to another
–One processor sends the data
–Another receives the data
Synchronous
–Call does not return until the message is sent or received
Asynchronous
–Call indicates the start of a send or receive; another call is made to determine if it has finished
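
The following slides detail only the synchronous (blocking) calls. As a sketch of the asynchronous style just mentioned, MPI_Isend/MPI_Irecv start a transfer and MPI_Wait completes it; these are standard MPI routines but are not covered further on these slides:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, data = 0;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        data = 42;                           /* value to send */
        MPI_Isend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* other work could overlap with the transfer here */
        MPI_Wait(&request, &status);         /* make sure the send finished */
    } else if (rank == 1) {
        MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        MPI_Wait(&request, &status);         /* block until the data arrives */
        printf("rank 1 received %d\n", data);
    }

    MPI_Finalize();
    return 0;
}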

14 Synchronous Send
C
–MPI_Send(&buffer, count, datatype, destination, tag, communicator);
Fortran
–call MPI_Send(buffer, count, datatype, destination, tag, communicator, ierr)
The call blocks until the message is on its way

15 Call MPI_Send(buffer, count, datatype, destination, tag, communicator, ierr)
Buffer: The data array to be sent
Count: Length of data array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Destination: Destination processor number (within given communicator)
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Ierr: Error return (Fortran only)
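
A short C fragment mapping concrete values onto these arguments (a sketch only; it assumes MPI has already been initialized and that a rank 1 exists):

#include <mpi.h>

/* Send 10 doubles from this processor to rank 1 with tag 99 */
void send_example(void)
{
    double work[10] = {0};   /* buffer: the data array to be sent    */
    int dest = 1;            /* destination rank in MPI_COMM_WORLD   */
    int tag  = 99;           /* arbitrary message type               */

    MPI_Send(work, 10, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
}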

16 Synchronous Receive
C
–MPI_Recv(&buffer, count, datatype, source, tag, communicator, &status);
Fortran
–call MPI_Recv(buffer, count, datatype, source, tag, communicator, status, ierr)
The call blocks the program until the message is in the buffer
Status - contains information about the incoming message
–C: MPI_Status status;
–Fortran: integer status(MPI_STATUS_SIZE)

17 Call MPI_Recv(buffer, count, datatype, source, tag, communicator, status, ierr)
Buffer: The data array to be received
Count: Maximum length of data array (in elements, 1 for scalars)
Datatype: Type of data, for example MPI_DOUBLE_PRECISION, MPI_INT, etc.
Source: Source processor number (within given communicator)
Tag: Message type (arbitrary integer)
Communicator: Your set of processors
Status: Information about message
Ierr: Error return (Fortran only)
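
A matching C fragment for the receiving side (a sketch; it assumes MPI is initialized). MPI_ANY_SOURCE, MPI_ANY_TAG, and MPI_Get_count are standard MPI features not covered on these slides:

#include <mpi.h>
#include <stdio.h>

/* Receive up to 10 doubles from any source/tag and inspect the status */
void recv_example(void)
{
    double work[10];
    MPI_Status status;
    int nreceived;

    MPI_Recv(work, 10, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_DOUBLE, &nreceived);

    printf("got %d values from rank %d, tag %d\n",
           nreceived, status.MPI_SOURCE, status.MPI_TAG);
}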

18 Exercise 2: Basic Send and Receive
Write a parallel program to send & receive data
–Initialize MPI
–Have processor 0 send an integer to processor 1
–Have processor 1 receive an integer from processor 0
–Both processors print the data
–Quit MPI
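
One possible C solution sketch (the value 123 and tag 10 are arbitrary choices; run with at least two processes, e.g. "mpirun -np 2 ./ex2"):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value, tag = 10;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 123;                                   /* data to send */
        MPI_Send(&value, 1, MPI_INT, 1, tag, MPI_COMM_WORLD);
        printf("processor 0 sent %d\n", value);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
        printf("processor 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}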

19 Summary
MPI is used to create parallel programs based on message passing
Usually the same program is run on multiple processors
The 6 basic calls in MPI are:
–MPI_INIT(ierr)
–MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
–MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)
–MPI_Send(buffer, count, MPI_INTEGER, destination, tag, MPI_COMM_WORLD, ierr)
–MPI_Recv(buffer, count, MPI_INTEGER, source, tag, MPI_COMM_WORLD, status, ierr)
–MPI_FINALIZE(ierr)