Part I: MPI from Scratch
By: Camilo A. Silva, Bioinformatics
Summer 2008, PIRE :: REU :: Cyberbridges

Background
Parallel processing:
– Separate workers or processes
– Interact by exchanging information
All use different data for each worker:
– Data-parallel: same operations on different data (also called SIMD)
– SPMD: same program, different data
– MIMD: different programs, different data

Types of Communication
Processes communicate in two ways:
– Cooperative: all parties agree to transfer the data
– One-sided: one worker performs the transfer of data

Message Passing…
Message passing is an approach that makes the exchange of data cooperative.

One-sided…
One-sided operations between parallel processes include remote memory reads and writes.

What is MPI?
MESSAGE PASSING INTERFACE
A message-passing library specification
-- a message-passing model
-- not a compiler specification
-- not a specific product
For parallel computers, clusters, and heterogeneous networks
Designed to permit the development of parallel software libraries
Designed to provide access to advanced parallel hardware for
-- end users
-- library writers
-- tool developers

Features of MPI
General
-- Communicators combine context and group for message security
-- Thread safety
Point-to-point communication
-- Structured buffers and derived datatypes, heterogeneity
-- Modes: normal (blocking and non-blocking), synchronous, ready (to allow access to fast protocols), buffered
Collective
-- Both built-in and user-defined collective operations
-- Large number of data movement routines
-- Subgroups defined directly or by topology

Features…
Application-oriented process topologies
-- Built-in support for grids and graphs (uses groups)
Profiling
-- Hooks allow users to intercept MPI calls to install their own tools
Environmental
-- inquiry
-- error control

Features not in MPI
Non-message-passing concepts not included:
-- process management
-- remote memory transfers
-- active messages
-- threads
-- virtual shared memory

MPI’s Essence
A process is (traditionally) a program counter and address space.
Processes may have multiple threads (program counters and associated stacks) sharing a single address space.
MPI is for communication among processes, which have separate address spaces.
Interprocess communication consists of
– Synchronization
– Movement of data from one process’s address space to another’s

MPI Basics
MPI can solve a wide range of problems using only six (6) functions:
– MPI_INIT: Initiate an MPI computation.
– MPI_FINALIZE: Terminate a computation.
– MPI_COMM_SIZE: Determine the number of processes.
– MPI_COMM_RANK: Determine my process identifier.
– MPI_SEND: Send a message.
– MPI_RECV: Receive a message.

Function Definitions
MPI_INIT(int *argc, char ***argv)
– Initiate a computation.
– argc and argv are required only in the C language binding, where they are the main program’s arguments.
MPI_FINALIZE()
– Shut down a computation.
MPI_COMM_SIZE(comm, size)
– Determine the number of processes in a computation.
  IN   comm   communicator (handle)
  OUT  size   number of processes in the group of comm (integer)
MPI_COMM_RANK(comm, pid)
– Determine the identifier of the current process.
  IN   comm   communicator (handle)
  OUT  pid    process id in the group of comm (integer)

MPI in Simple C Code
#include <stdio.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    MPI_Init( &argc, &argv );      /* start up the MPI environment */
    printf( "Hello world\n" );     /* every process prints this line */
    MPI_Finalize();                /* shut MPI down before exiting */
    return 0;
}
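
As a minimal sketch of the MPI_COMM_SIZE and MPI_COMM_RANK definitions above (not part of the original slides; the variable names rank and size are illustrative), the same hello-world program can report each process’s rank and the total process count:

#include <stdio.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    int rank, size;
    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );  /* this process's id within MPI_COMM_WORLD */
    MPI_Comm_size( MPI_COMM_WORLD, &size );  /* total number of processes */
    printf( "Hello world from process %d of %d\n", rank, size );
    MPI_Finalize();
    return 0;
}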

Function Definitions…
MPI_SEND(buf, count, datatype, dest, tag, comm)
– Send a message.
  IN   buf       address of send buffer (choice)
  IN   count     number of elements to send (integer >= 0)
  IN   datatype  datatype of send buffer elements (handle)
  IN   dest      process id of destination process (integer)
  IN   tag       message tag (integer)
  IN   comm      communicator (handle)
MPI_RECV(buf, count, datatype, source, tag, comm, status)
– Receive a message.
  OUT  buf       address of receive buffer (choice)
  IN   count     size of receive buffer, in elements (integer >= 0)
  IN   datatype  datatype of receive buffer elements (handle)
  IN   source    process id of source process (integer)
  IN   tag       message tag (integer)
  IN   comm      communicator (handle)
  OUT  status    status object describing the received message (status)
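
To make the send and receive signatures concrete, here is a small sketch (not from the original slides) in which process 0 sends one integer to process 1; the tag value 0 and the variable names are arbitrary choices:

#include <stdio.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    int rank, value;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if (rank == 0) {
        value = 42;
        /* send one MPI_INT to process 1 with tag 0 */
        MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
    } else if (rank == 1) {
        /* receive one MPI_INT from process 0 with tag 0 */
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
        printf( "Process 1 received %d from process 0\n", value );
    }

    MPI_Finalize();
    return 0;
}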

MPI Common Terminology
Processes can be collected into groups.
Each message is sent in a context and must be received in the same context.
A group and a context together form a communicator.
A process is identified by its rank in the group associated with a communicator.
There is a default communicator whose group contains all initial processes, called MPI_COMM_WORLD.

MPI Basic Send and Receive
To whom is data sent?
What is sent?
How does the receiver identify it?

Collective Communication
Barrier: synchronizes all processes.
Broadcast: sends data from one process to all processes.
Gather: gathers data from all processes to one process.
Scatter: scatters data from one process to all processes.
Reduction operations: sums, multiplies, etc., distributed data.
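
For instance, a minimal sketch (not from the original slides) of a barrier followed by a broadcast could look like this; the value 100 and the variable name n are illustrative:

#include <stdio.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    int rank, n = 0;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    if (rank == 0)
        n = 100;                     /* only the root knows the value initially */

    MPI_Barrier( MPI_COMM_WORLD );   /* wait until every process reaches this point */
    MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );  /* now every process has n = 100 */

    printf( "Process %d sees n = %d\n", rank, n );
    MPI_Finalize();
    return 0;
}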

Collective Functions
1. MPI_BCAST to broadcast the problem size parameter (size) from process 0 to all np processes;
2. MPI_SCATTER to distribute an input array (work) from process 0 to other processes, so that each process receives size/np elements;
3. MPI_SEND and MPI_RECV for exchange of data (a single floating-point number) with neighbors;
4. MPI_ALLREDUCE to determine the maximum of a set of localerr values computed at the different processes and to distribute this maximum value to each process; and
5. MPI_GATHER to accumulate an output array at process 0.
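
As a hedged sketch of the scatter/gather pattern described above (the array length, the work array name, and the use of MPI_FLOAT are illustrative assumptions, and size is assumed to be a multiple of the number of processes):

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main( int argc, char *argv[] )
{
    int rank, np, size = 16;
    float *work = NULL, *chunk, *result = NULL;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &np );

    if (rank == 0) {                       /* only the root owns the full arrays */
        work = malloc( size * sizeof(float) );
        result = malloc( size * sizeof(float) );
        for (int i = 0; i < size; i++) work[i] = (float) i;
    }
    chunk = malloc( (size / np) * sizeof(float) );

    /* distribute size/np elements of work from process 0 to every process */
    MPI_Scatter( work, size / np, MPI_FLOAT,
                 chunk, size / np, MPI_FLOAT, 0, MPI_COMM_WORLD );

    for (int i = 0; i < size / np; i++)    /* each process works on its own chunk */
        chunk[i] = chunk[i] * 2.0f;

    /* collect the processed chunks back into result on process 0 */
    MPI_Gather( chunk, size / np, MPI_FLOAT,
                result, size / np, MPI_FLOAT, 0, MPI_COMM_WORLD );

    if (rank == 0)
        printf( "result[%d] = %f\n", size - 1, result[size - 1] );

    MPI_Finalize();
    return 0;
}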

MPI C Collective Code
#include "mpi.h"
#include <stdio.h>
#include <math.h>

int main( int argc, char *argv[] )
{
    int n, myid, numprocs, i;
    double PI25DT = 3.141592653589793238462643;
    double mypi, pi, h, sum, x;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &numprocs );
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );

    while (1) {
        if (myid == 0) {
            printf( "Enter the number of intervals: (0 quits) " );
            scanf( "%d", &n );
        }
        /* broadcast the number of intervals to all processes */
        MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );
        if (n == 0)
            break;
        h = 1.0 / (double) n;
        sum = 0.0;
        /* each process sums its share of the rectangles */
        for (i = myid + 1; i <= n; i += numprocs) {
            x = h * ((double) i - 0.5);
            sum += 4.0 / (1.0 + x * x);
        }
        mypi = h * sum;
        /* combine the partial sums on process 0 */
        MPI_Reduce( &mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );
        if (myid == 0)
            printf( "pi is approximately %.16f, Error is %.16f\n",
                    pi, fabs(pi - PI25DT) );
    }
    MPI_Finalize();
    return 0;
}
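
With a typical MPI installation such as MPICH2 or OpenMPI (both mentioned in the follow-up slide), a program like this is usually compiled with the mpicc wrapper and launched with mpiexec, for example: mpicc cpi.c -o cpi followed by mpiexec -n 4 ./cpi (the file name and process count here are just examples).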

Follow-up
Asynchronous communication
Modularity
Data types + heterogeneity
Buffering issues + Quality of Service (QoS)
MPI implementation: MPICH2 :: OpenMPI
Parallel program structure for Project 18
