Distributed-Memory (Message-Passing) Paradigm FDI 2004 Track M Day 2 – Morning Session #1 C. J. Ribbens

Characteristics of Distributed-Memory Machines
– Scalable interconnection network
– Some memory is physically local, some remote
– Message-passing programming model
– Major issues: network latency and bandwidth, message-passing overhead, data decomposition
– Incremental parallelization can be difficult

What is MPI?
– De facto standard API for explicit message-passing SPMD programming
– Many implementations over many networks
– Developed in the mid-1990s by a consortium, reflecting lessons learned from machine-specific libraries and PVM
– Focused on: homogeneous MPPs, high performance, library writing, portability
– For more information: MPI links

Six-Function MPI
MPI_Init         Initialize MPI
MPI_Finalize     Close it down
MPI_Comm_rank    Get my process #
MPI_Comm_size    How many total?
MPI_Send         Send message
MPI_Recv         Receive message

MPI “Hello World” in Fortran77

      implicit none
      include 'mpif.h'
      integer myid, numprocs, ierr
      call mpi_init( ierr )
      call mpi_comm_rank( mpi_comm_world, myid, ierr )
      call mpi_comm_size( mpi_comm_world, numprocs, ierr )
      print *, "hello from ", myid, " of ", numprocs
      call mpi_finalize( ierr )
      stop
      end

MPI “Hello World” in C

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myid, numprocs, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);
    printf("hello from %s: process %d of %d\n",
           processor_name, myid, numprocs);
    MPI_Finalize();
    return 0;
}
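Exact command names vary by MPI implementation, but with a typical installation the program is built with the MPI wrapper compiler and launched across several processes, for example:

mpicc hello.c -o hello
mpirun -np 4 ./hello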

Function MPI_Send

C binding:
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)

Fortran binding:
MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERR)
<type> BUF(*)
INTEGER COUNT, DATATYPE, DEST, TAG, COMM, IERR

MPI_SEND(buf, count, datatype, dest, tag, comm)
IN   buf        initial address of send buffer (choice)
IN   count      number of entries to send (integer)
IN   datatype   datatype of each entry (handle)
IN   dest       rank of destination (integer)
IN   tag        message tag (integer)
IN   comm       communicator (handle)

Function MPI_Recv

C binding:
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)

Fortran binding:
MPI_RECV(BUF, COUNT, DATATYPE, SOURCE, TAG, COMM, STATUS, IERR)
<type> BUF(*)
INTEGER COUNT, DATATYPE, SOURCE, TAG, COMM
INTEGER STATUS(MPI_STATUS_SIZE), IERR

MPI_RECV(buf, count, datatype, source, tag, comm, status)
OUT  buf        initial address of receive buffer (choice)
IN   count      max number of entries to receive (integer)
IN   datatype   datatype of each entry (handle)
IN   source     rank of source (integer)
IN   tag        message tag (integer)
IN   comm       communicator (handle)
OUT  status     return status (Status)
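The status argument records where a received message actually came from and how large it was, which matters when the standard wildcards MPI_ANY_SOURCE and MPI_ANY_TAG are used. A minimal sketch of querying it in C (fragment only; variable names are illustrative and the code assumes MPI has already been initialized):

MPI_Status status;
int count;
double buf[100];

MPI_Recv(buf, 100, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
         MPI_COMM_WORLD, &status);
MPI_Get_count(&status, MPI_DOUBLE, &count);   /* entries actually received */
printf("got %d doubles from rank %d with tag %d\n",
       count, status.MPI_SOURCE, status.MPI_TAG);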

MPI Send and Recv Semantics
– These are “standard mode”, blocking calls. Send returns when buf may be re-used; Recv returns when the data in buf is available.
– Count consecutive items of type datatype, beginning at buf, are sent to the process with rank dest.
– Tag can be used to distinguish among messages from the same source.
– Messages are non-overtaking. Buffering is up to the implementation.
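To make the calling sequence concrete, here is a minimal sketch (not from the original slides; run with at least two processes) in which rank 0 sends an array of doubles to rank 1 using the blocking calls above:

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myid;
    double work[4] = {1.0, 2.0, 3.0, 4.0};

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        /* send 4 doubles to rank 1 with tag 99 */
        MPI_Send(work, 4, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
    } else if (myid == 1) {
        MPI_Status status;
        /* blocks until the matching message has arrived */
        MPI_Recv(work, 4, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD, &status);
        printf("rank 1 received %g ... %g\n", work[0], work[3]);
    }

    MPI_Finalize();
    return 0;
}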

MPI Topics FDI 2004 Track M Day 2 – Morning Session #2 C. J. Ribbens

MPI Communicators
A communicator can be thought of as a set of processes; every communication event takes place in the context of a particular communicator. MPI_COMM_WORLD is the initial set of processes (note: MPI-1 has a static process model). Why do communicators exist?
– Collective operations over subsets of processes
– Can define special topologies for sets of processes
– Separate communication contexts for libraries
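One common way to build such a subset (an added sketch, not shown on the original slide) is MPI_Comm_split, which partitions an existing communicator by a “color” value; the fragment below splits the world into even- and odd-ranked groups:

int world_rank, sub_rank;
MPI_Comm subcomm;

MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
/* processes with the same color (rank parity) end up in the same
   new communicator; ranks are renumbered 0,1,2,... within each */
MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
MPI_Comm_rank(subcomm, &sub_rank);

/* collectives over subcomm now involve only that subset */
MPI_Comm_free(&subcomm);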

MPI Collective Operations
– An operation over a communicator
– Must be called by every member of the communicator
– Three classes of collective operations:
  – Synchronization (MPI_Barrier)
  – Data movement
  – Collective computation
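As a hedged illustration of the data-movement and collective-computation classes (not part of the original slides), the fragment below broadcasts a parameter from rank 0 and then combines per-process results; it assumes myid from MPI_Comm_rank as in the earlier examples, and do_local_work is a hypothetical placeholder:

int n;                 /* problem size, chosen by rank 0 */
double mypart, total;

if (myid == 0) n = 1000;
/* data movement: every process receives rank 0's value of n */
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

mypart = do_local_work(n, myid);   /* hypothetical local computation */
/* collective computation: sum the per-process results onto rank 0 */
MPI_Reduce(&mypart, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);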

Collective Patterns (Gropp)

Collective Computation Patterns (Gropp)

Collective Routines (Gropp)
Many routines: Allgather, Allgatherv, Allreduce, Alltoall, Alltoallv, Bcast, Gather, Gatherv, Reduce, ReduceScatter, Scan, Scatter, Scatterv
– The “All” versions deliver results to all participating processes.
– The “v” versions allow the chunks to have different sizes.
– Allreduce, Reduce, ReduceScatter, and Scan take both built-in and user-defined combination functions.
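To illustrate the “v” variants (an added sketch, not from the slides), MPI_Gatherv lets each process contribute a different number of elements; the receive counts and displacements only need to be meaningful on the root. The fragment assumes myid and numprocs from the hello-world example and at most 8 processes:

/* each rank sends (rank+1) doubles; rank 0 gathers them contiguously */
int ncontrib = myid + 1;
double sendbuf[8], recvbuf[64];
int counts[8], displs[8];

for (int i = 0; i < ncontrib; i++) sendbuf[i] = (double) myid;

if (myid == 0) {
    int offset = 0;
    for (int i = 0; i < numprocs; i++) {
        counts[i] = i + 1;       /* how many items rank i contributes */
        displs[i] = offset;      /* where rank i's data lands in recvbuf */
        offset += counts[i];
    }
}
MPI_Gatherv(sendbuf, ncontrib, MPI_DOUBLE,
            recvbuf, counts, displs, MPI_DOUBLE, 0, MPI_COMM_WORLD);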

MPI topics not covered …
– Topologies: Cartesian and graph
– User-defined types
– Message-passing modes
– Intercommunicators
– MPI-2 topics: MPI I/O, remote memory access, dynamic process management