1
MPI Adriano Cruz ©2003 NCE/UFRJ e IM/UFRJ
2
Summary
- References
- Introduction
- Point-to-point communication
- Collective communication
- Groups and Communicators
- MPI Implementations
- Comparing MPI and PVM
3
MPI References
4
Resources
- www.mpi-forum.org
- William Gropp, Ewing Lusk, Anthony Skjellum. Using MPI: Portable Parallel Programming with the Message-Passing Interface. The MIT Press.
- Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, Jack Dongarra. MPI: The Complete Reference. The MIT Press. Also available by ftp from ftp.netlib.org (cd utk/papers/mpi-book; get mpi-book.ps; quit).
5
MPI Introduction
6
Message-Passing Systems
- PICL
- PVM
- Chameleon
- Express
7
History
- April 1992: the Center for Research on Parallel Computation sponsored a workshop on standards for message passing in a distributed memory environment
- November 1992: at the Supercomputing '92 conference a committee was formed to define a message-passing standard
- 1994: the MPI standard was completed
8
MPI Forum goals
- Define a portable standard for message passing. It would not be an official, ANSI-like standard.
- Operate in a completely open way. Anyone would be free to join the discussions.
- Be finished in one year.
9
First ones
- Parallel computer vendors: IBM, Intel, Cray, NEC, Convex, etc.
- Creators of portable libraries: PVM, p4, Chameleon, Express, etc.
- A number of parallel application specialists
10
Primary Goal
- The primary goal of the MPI specification is to demonstrate that users need not compromise among efficiency, portability, and functionality.
11
What is not new?
- MPI is a library.
- MPI is an attempt to collect the best features of many message-passing systems.
- A computation is a collection of processes communicating using messages.
12
Basic message concepts: send

    send(address, length, destination, tag)

- address is the beginning of the buffer containing the data to be sent
- length is the length, in bytes, of the message
- destination is the process identifier of the process to which this message is sent
- tag is an arbitrary nonnegative integer used to restrict receipt of the message
13
System requirements
- The system must supply queuing capabilities so that a receive operation such as

      recv(address, maxlen, source, tag, actlen);

  completes successfully only when a message with a matching tag has arrived.
14
Receive

    recv(address, maxlen, source, tag, actlen);

- address and maxlen define the receiving buffer
- actlen is the actual number of bytes received
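To make the calling convention concrete, here is a minimal, purely illustrative C sketch of a two-process exchange using these generic calls; send, recv, and the helper get_my_process_id are hypothetical names taken from (or invented for) the slides, not a real library API:

    /* Purely illustrative: 'send' and 'recv' are the generic calls
     * described above, not functions from any real library. */
    double work[100];
    int actlen;
    int my_id = get_my_process_id();   /* hypothetical helper */

    if (my_id == 0) {
        /* send 800 bytes (100 doubles) to process 1, tagged 7 */
        send(work, 100 * sizeof(double), 1, 7);
    } else if (my_id == 1) {
        /* accept up to 800 bytes from process 0 with tag 7;
         * actlen reports how many bytes actually arrived */
        recv(work, 100 * sizeof(double), 0, 7, &actlen);
    }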
15
Message buffer problems
- The (address, length) specification was a good match for early hardware
- In many situations the message is not contiguous
- In many heterogeneous systems the data types vary from machine to machine
16
MPI solution
- MPI specifies messages at a higher level and in a more flexible way
- An MPI message is defined by the triple (address, count, datatype)
- MPI provides functions that give users the power to construct their own datatypes
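As one concrete illustration (the slide does not name a specific call), MPI's standard datatype-construction routine MPI_Type_vector can describe a non-contiguous column of a row-major matrix, so a single (address, count, datatype) triple covers the whole column:

    #include <mpi.h>

    /* Sketch: send column 0 of a 4x4 row-major matrix of doubles to
     * rank 1 (assumes MPI_Init has been called and rank 1 exists). */
    void send_column(double a[4][4])
    {
        MPI_Datatype column;

        /* 4 blocks of 1 double, with a stride of 4 doubles between them */
        MPI_Type_vector(4, 1, 4, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        /* one occurrence of the derived datatype covers the whole column */
        MPI_Send(&a[0][0], 1, column, 1, 0, MPI_COMM_WORLD);

        MPI_Type_free(&column);
    }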
17
Separating Families of Messages
- Early systems provided a tag argument so that users could deal with messages in an organized way
- The problem is that all users have to agree on a predefined tag convention
- Libraries are a particular problem: a library's internal tags can collide with the tags chosen by the user's own code
18
MPI solution
- Instead of relying on tags alone, MPI uses the concept of context
- Contexts are allocated at run time by the system
- No wild card is allowed for contexts
- The message tag, with its wild-card matching, is retained
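A short sketch of how this is used in practice: MPI_Comm_dup (a standard MPI call, though not named on the slide) creates a communicator with the same group but a fresh, system-allocated context, which is how a library keeps its traffic separate from the user's:

    #include <mpi.h>

    /* Sketch: typical library initialization. The duplicate has the
     * same process group as comm but a new context, so messages sent
     * on the result can never match receives posted on comm. */
    MPI_Comm make_private_comm(MPI_Comm comm)
    {
        MPI_Comm lib_comm;
        MPI_Comm_dup(comm, &lib_comm);
        return lib_comm;   /* caller releases it with MPI_Comm_free */
    }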
19
Naming Processes
- Processes belong to groups
- Within a group, processes are identified by ranks 0..n-1, where n is the number of processes
- There is an initial group to which all processes belong
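The canonical first MPI program shows these ideas directly: every process starts in the initial group (reachable through MPI_COMM_WORLD) and asks for its rank and the group size:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my rank: 0..size-1 */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of processes */

        printf("Process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }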
20
Communicators
- Context and group are combined into an object called a communicator
- The destination or source in a send or receive always refers to the rank of the process in the group identified with the given communicator
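One way to see group and context combined (again a standard MPI call, not one the slide names) is MPI_Comm_split, which partitions an existing communicator into disjoint subgroups, each getting its own communicator and therefore its own context:

    #include <mpi.h>

    /* Sketch: split MPI_COMM_WORLD into two halves by rank parity.
     * Processes passing the same 'color' land in the same subgroup;
     * the 'key' orders ranks inside it, so each half is renumbered
     * 0..n-1. */
    void split_example(void)
    {
        int world_rank;
        MPI_Comm half;

        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &half);

        /* ... communicate within 'half' ... */

        MPI_Comm_free(&half);
    }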
21
MPI send

    MPI_Send(buf, count, datatype, dest, tag, comm)

- the message consists of count occurrences of datatype starting at buf
- dest is the rank of the destination in the group associated with the communicator comm
- tag is the usual message tag
- comm identifies a group of processes and a communication context
22
MPI receive

    MPI_Recv(buf, count, datatype, source, tag, comm, status);
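Putting the two calls together, a minimal two-process exchange (run with at least two processes); the status argument reports, among other things, the actual source of the received message:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, n = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&n, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&n, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("received %d from rank %d\n", n, status.MPI_SOURCE);
        }

        MPI_Finalize();
        return 0;
    }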
23
Other features
- Collective communications: broadcast, scatter, and gather
- Collective computations: maximum, minimum, sum, logical operations, etc.
- Mechanisms for creating process groups and communicators
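A sketch of the two collective flavors just listed, using the standard calls MPI_Bcast (communication: the root's value reaches every process) and MPI_Reduce (computation: a sum across the group):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value = 0, sum = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) value = 10;

        /* broadcast: afterwards every process holds the root's value */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* collective computation: sum of 'value' over all processes,
         * delivered at the root (rank 0) */
        MPI_Reduce(&value, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0) printf("sum = %d\n", sum);

        MPI_Finalize();
        return 0;
    }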