MPI Adriano Cruz ©2003 NCE/UFRJ and IM/UFRJ

Summary
- References
- Introduction
- Point-to-point communication
- Collective communication
- Groups and Communicators
- MPI Implementations
- Comparing MPI and PVM

MPI References

Resources
- William Gropp, Ewing Lusk, Anthony Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press.
- Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra. MPI: The Complete Reference. The MIT Press. Available by ftp from ftp.netlib.org (cd utk/papers/mpi-book; get mpi-book.ps; quit)

MPI Introduction

Message-Passing Systems
- PICL
- PVM
- Chameleon
- Express

History
- April 1992: the Center for Research on Parallel Computation sponsored a workshop on message passing in a distributed-memory environment
- November 1992: at the Supercomputing conference a committee was formed to define a message-passing standard
- 1994: the MPI Standard was completed

MPI Forum goals
- Define a portable standard for message passing; it would not be an official, ANSI-like standard
- Operate in a completely open way; anyone would be free to join the discussions
- Be finished in one year

First ones
- Parallel computer vendors: IBM, Intel, Cray, NEC, Convex, etc.
- Creators of portable libraries: PVM, p4, Chameleon, Express, etc.
- A number of parallel application specialists

Primary Goal
- The primary goal of the MPI specification is to demonstrate that users need not compromise among efficiency, portability and functionality.

What is not new?
- MPI is a library.
- MPI is an attempt to collect the best features of many message-passing systems.
- A computation is a collection of processes communicating using messages.

Basic message concepts - send
- send(address, length, destination, tag)
- address is the beginning of the buffer containing the data to be sent
- length is the length, in bytes, of the message
- destination is the process identifier of the process to which the message is sent
- tag is an arbitrary nonnegative integer used to restrict receipt of the message

System requirements
- The system must supply queuing capabilities so that a receive operation such as recv(address, maxlen, source, tag, actlen) completes successfully only when a message with the correct tag has been received

Receive
- recv(address, maxlen, source, tag, actlen)
- address and maxlen define the receiving buffer
- actlen is the actual number of bytes received
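The two slides above describe the classic, pre-MPI style of send and receive calls. A minimal sketch of what such a pair of prototypes might look like in C follows; the names and signatures are illustrative only, not part of MPI or of any particular library.

/* Illustrative prototypes only; not MPI and not any real library's API. */
void send(void *address, int length, int destination, int tag);
void recv(void *address, int maxlen, int source, int tag, int *actlen);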

Message buffer problems
- The (address, length) specification was a good match for early hardware
- In many situations the message is not contiguous in memory
- In many heterogeneous systems the data types vary between machines

MPI solution
- MPI specifies messages at a higher level and in a more flexible way
- MPI messages are described by the triple (address, count, datatype)
- MPI provides functions that give users the power to construct their own datatypes
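As an illustration of the (address, count, datatype) triple, the sketch below builds a derived datatype for one column of a small row-major matrix, a typical non-contiguous message. It is a minimal example assuming a 4x4 matrix of doubles; the function name send_column is chosen here for illustration and is not part of MPI.

#include <mpi.h>

void send_column(double a[4][4], int dest, int tag)
{
    MPI_Datatype column_t;

    /* 4 blocks of 1 double, stride of 4 doubles: one column of a
       row-major 4x4 matrix */
    MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column_t);
    MPI_Type_commit(&column_t);

    /* the message is still described by (address, count, datatype) */
    MPI_Send(&a[0][1], 1, column_t, dest, tag, MPI_COMM_WORLD);

    MPI_Type_free(&column_t);
}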

Separating Families of Messages
- Early systems provided a tag argument so that users could deal with messages in an organized way
- The problem is that all users have to agree on how tags are assigned
- Libraries are a particular problem, since they cannot know which tags user code already uses

MPI solution
- Instead of relying on tags alone, MPI uses the concept of a context
- Contexts are allocated at run time by the system
- No wild card is allowed for contexts
- The concept of message tags, with wild cards, is retained
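One common way a library obtains its own context is by duplicating the communicator handed to it by the user. The sketch below assumes a library initialization routine called lib_init; that name is ours, only MPI_Comm_dup is an actual MPI call.

#include <mpi.h>

static MPI_Comm lib_comm;    /* library's private communicator */

void lib_init(MPI_Comm user_comm)
{
    /* same group of processes, but a fresh context: library messages
       can never be matched by user-level sends and receives */
    MPI_Comm_dup(user_comm, &lib_comm);
}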

Naming Processes
- Processes belong to groups
- Processes are identified by ranks (0..n-1), where n is the number of processes in the group
- There is an initial group to which all processes belong

Communicators
- Contexts and groups are combined into an object called a communicator
- The destination or source always refers to the rank of the process in the group identified by the given communicator
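A minimal sketch of groups, ranks and the initial communicator MPI_COMM_WORLD is shown below; such a program is typically compiled with an MPI compiler wrapper and launched with several processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);

    /* every process starts in the group behind MPI_COMM_WORLD */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}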

MPI send
- MPI_Send(buf, count, datatype, dest, tag, comm)
- The message consists of count occurrences of datatype starting at buf
- dest is the rank of the destination in the group associated with the communicator comm
- tag is the usual message tag
- comm identifies a group of processes and a communication context

MPI receive
- MPI_Recv(buf, count, datatype, source, tag, comm, status)
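Putting MPI_Send and MPI_Recv together, the sketch below passes one integer from rank 0 to rank 1. It is a minimal example and assumes the program is started with at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* 1 occurrence of MPI_INT, destination rank 1, tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* status reports the actual source, tag and length received */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}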

Other features
- Collective communications: broadcast, scatter and gather
- Collective computations: maximum, minimum, sum, logical operations, etc.
- Mechanism for creating
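A small sketch of the two kinds of collective operations listed above: a broadcast (collective communication) followed by a reduction (collective computation). The broadcast value 100 is arbitrary and chosen only for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, n = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        n = 100;                     /* root supplies the data */

    /* collective communication: every process receives root's n */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* collective computation: sum of all ranks, result left on rank 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast %d, sum of ranks %d\n", n, sum);

    MPI_Finalize();
    return 0;
}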