Introduction to parallelism and the Message Passing Interface


Paschalis Korosoglou, AUTH/GRNET (support@grid.auth.gr)
EPIKH Application Porting School 2011, Beijing, China

Overview of Parallel computing
In parallel computing a program spawns several concurrent processes in order to:
- decrease the runtime needed to solve a problem, or
- increase the size of the problem that can be solved.
The original problem is decomposed into tasks that ideally run independently.
Source code development within some parallel programming environment depends on:
- the hardware platform
- the nature of the problem
- the performance goals

Hardware considerations
Distributed memory systems
- Each process (or processor) has a unique address space.
- Direct access to another processor's memory is not allowed.
- Process synchronization occurs implicitly, through the exchange of messages.
Shared memory systems
- Processors share the same address space; knowledge of where data is stored is of no concern to the user.
- Process synchronization is explicit.
- Not scalable to large numbers of processors.

Distributed and Shared memory (diagram)

Parallel programming models
Distributed memory systems
- The programmer uses "message passing" in order to synchronize processes and share data among them.
- Message passing libraries: MPI, PVM
Shared memory systems
- Thread-based programming approach
- Compiler directives (e.g. OpenMP)
- Message passing may also be used

Nature of the problem
Parallel computing problems fall roughly into two categories:
- Embarrassingly parallel (parametric studies)
- Multiple processes running concurrently (domain and/or functional decomposition)

Nature of the problem (examples)
- Functional partitioning (diagram)
- Domain decomposition (diagram)

Message Passing Interface
MPI is a message passing library specification:
- not a language or compiler specification
- not a specific implementation
A process may be defined as a program counter and an address space.
Message passing is used for communication among processes:
- synchronization
- data movement between address spaces

Hello World! The most boring (yet popular) way to get started...
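A minimal sketch of such a program in C is shown below. It is illustrative rather than taken from the course material, and uses only the MPI_Init and MPI_Finalize calls described on the next slide.

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);       /* must be the first MPI call in the program */
    printf("Hello, world!\n");
    MPI_Finalize();               /* no MPI calls are allowed after this point */
    return 0;
}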

Starting and exiting the MPI environment
MPI_Init
- C style: int MPI_Init(int *argc, char ***argv);  accepts the argc and argv variables (main arguments)
- Fortran style: MPI_INIT(IERROR)  (almost all Fortran MPI library calls have an integer return code)
- Must be the first MPI function called in a program
MPI_Finalize
- C style: int MPI_Finalize();
- Fortran style: MPI_FINALIZE(IERROR)

Communicators
- All MPI-specific communications take place with respect to a communicator.
- Communicator: a collection of processes and a context.
- MPI_COMM_WORLD is the predefined communicator of all processes.
- Processes within a communicator are assigned a unique rank value.

A few basic considerations
How many processes are there?
- (C) MPI_Comm_size(MPI_COMM_WORLD, &size);
- (F) MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
Which one is which?
- (C) MPI_Comm_rank(MPI_COMM_WORLD, &rank);
- (F) MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
The rank number is between 0 and (size - 1).
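Putting the above together, a sketch (not from the slides) of a program in which every process reports its rank and the total number of processes:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes are there? */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which one is this process? (0 .. size-1) */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}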

Sending and receiving messages
Questions:
- Where is the data?
- What type of data is it?
- How much data is sent?
- To whom is the data sent?
- How does the receiver know which data to collect?

What is contained within a message?
Message data
- buffer
- count
- datatype
Message envelope
- source/destination rank
- message tag (tags are used to discriminate among messages)
- communicator
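For reference, the standard MPI_Send and MPI_Recv prototypes map directly onto the message data and envelope listed above (these are the standard C bindings, not code from the slides):

/* data: buf, count, datatype -- envelope: dest/source, tag, comm */
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status);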

A simple example

double buf0[N], buf1[N];
MPI_Status status;
int tag_of_message, src_of_message;

if (rank == 0) {
    for (int i = 0; i < N; i++) buf0[i] = 0.1 * (double) i;
    MPI_Send(buf0, N, MPI_DOUBLE, 1, 157, MPI_COMM_WORLD);
    MPI_Recv(buf1, N, MPI_DOUBLE, 1, 158, MPI_COMM_WORLD, &status);
    tag_of_message = status.MPI_TAG;
    src_of_message = status.MPI_SOURCE;
    cout << "from process " << src_of_message << " using tag " << tag_of_message
         << " buf1[N-1] = " << buf1[N-1] << endl;
} else if (rank == 1) {
    for (int i = 0; i < N; i++) buf1[i] = 0.9 * (double) i;
    MPI_Recv(buf0, N, MPI_DOUBLE, 0, 157, MPI_COMM_WORLD, &status);
    MPI_Send(buf1, N, MPI_DOUBLE, 0, 158, MPI_COMM_WORLD);
    tag_of_message = status.MPI_TAG;
    src_of_message = status.MPI_SOURCE;
    cout << "from process " << src_of_message << " using tag " << tag_of_message
         << " buf0[N-1] = " << buf0[N-1] << endl;
}

Program output (with N = 10):
from process 1 using tag 158 buf1[N-1] = 8.1
from process 0 using tag 157 buf0[N-1] = 0.9

Blocking communication
- MPI_Send does not complete until the send buffer is empty (available for reuse).
- MPI_Recv does not complete until the receive buffer is full (available for use).
- MPI uses internal buffers to pack messages, thus short messages do not produce deadlocks.
- To avoid deadlocks one either reverses the order of the Send/Recv calls on one end, or uses the non-blocking calls (MPI_Isend or MPI_Irecv respectively, followed by MPI_Wait).
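A hedged sketch of the non-blocking pattern mentioned above (sendbuf, recvbuf and N are assumed, not defined in the slides): each of the two ranks posts its receive first, then sends, and waits before touching the received data.

MPI_Request request;
MPI_Status  status;
int other = (rank == 0) ? 1 : 0;   /* assumes exactly two ranks exchange data */

MPI_Irecv(recvbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD, &request);
MPI_Send(sendbuf, N, MPI_DOUBLE, other, 0, MPI_COMM_WORLD);
MPI_Wait(&request, &status);       /* recvbuf may only be used after the wait completes */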

Collective communications
- All processes within the specified communicator participate.
- All collective operations are blocking.
- All processes must call the collective operation.
- No message tags are used.
Three classes of collective communications:
- Data movement
- Collective computation
- Synchronization
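One representative call from each class, as an illustrative sketch (the variables n, local and total are assumptions, not from the slides):

/* data movement: rank 0 broadcasts its value of n to all processes */
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

/* collective computation: sum every process's local value into total on rank 0 */
MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

/* synchronization: no process proceeds until all have reached this point */
MPI_Barrier(MPI_COMM_WORLD);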

Hands-on exercises
$ ssh beijingXX@10.1.1.177
$ cd epikh-application-porting-school-2011
$ svn update

Parallel Pi computation
Monte Carlo integration:
- Generate N random points p_i in the [0,1] x [0,1] domain.
- Count the number of hits M, i.e. points with r(p_i) < 1 (points falling inside the unit circle).
- Pi is then approximated by 4M/N.
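A sketch of how this computation might be parallelized with MPI. This is illustrative only and not the exercise code from the course repository; the point count N and the use of rand() are assumptions.

#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    const long N = 1000000;              /* total number of random points (assumed) */
    long local_hits = 0, hits = 0;
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    srand(rank + 1);                     /* a different seed on every process */
    for (long i = rank; i < N; i += size) {
        double x = (double) rand() / RAND_MAX;
        double y = (double) rand() / RAND_MAX;
        if (x * x + y * y < 1.0)         /* hit: the point lies inside the unit circle */
            local_hits++;
    }

    /* sum the hits of all processes on rank 0 */
    MPI_Reduce(&local_hits, &hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %f\n", 4.0 * (double) hits / (double) N);

    MPI_Finalize();
    return 0;
}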

Compile and execute a "mm" job
- The master process decomposes matrix a and provides the slave processes with their input.
- Each slave process carries ns = nm/sz rows of a and the complete b matrix in order to carry out its share of the computation.
- Results are sent back to the master, who prints out the timing.
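A rough sketch of the row-wise decomposition described above (the dimension NM, the broadcast of b and the message layout are assumptions; this is not the course's mm code, and it assumes the dimension is divisible by the number of processes):

#include <stdio.h>
#include "mpi.h"

#define NM 512                            /* assumed matrix dimension */

static double a[NM][NM], b[NM][NM], c[NM][NM];

int main(int argc, char *argv[])
{
    int rank, sz;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &sz);

    int ns = NM / sz;                     /* rows of a handled by each process */
    double t0 = MPI_Wtime();

    if (rank == 0) {
        /* master: initialize a and b (omitted), then send each slave its rows of a */
        for (int p = 1; p < sz; p++)
            MPI_Send(&a[p * ns][0], ns * NM, MPI_DOUBLE, p, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(&a[rank * ns][0], ns * NM, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    MPI_Bcast(&b[0][0], NM * NM, MPI_DOUBLE, 0, MPI_COMM_WORLD);  /* everyone gets all of b */

    /* each process multiplies its ns rows of a with the complete b matrix */
    for (int i = rank * ns; i < (rank + 1) * ns; i++)
        for (int j = 0; j < NM; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < NM; k++)
                c[i][j] += a[i][k] * b[k][j];
        }

    /* slaves send their rows of c back; the master collects them and prints the timing */
    if (rank != 0) {
        MPI_Send(&c[rank * ns][0], ns * NM, MPI_DOUBLE, 0, 1, MPI_COMM_WORLD);
    } else {
        for (int p = 1; p < sz; p++)
            MPI_Recv(&c[p * ns][0], ns * NM, MPI_DOUBLE, p, 1, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        printf("elapsed time: %f s\n", MPI_Wtime() - t0);
    }

    MPI_Finalize();
    return 0;
}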