ICOM 5995: Performance Instrumentation and Visualization for High Performance Computer Systems
Lecture 9
October 30, 2002
Nayda G. Santiago

Announcement

Daniel Burbano
Projects
Attendance list
Registrar’s office
We will go to the lab

Overview

MPI basic functions
References: MPICH home page, Jack Dongarra’s homepage

Getting started with MPI

MPI contains 125 routines (more with extensions!)
Many programs can be written with only six (6) MPI routines
Upon startup, every process can be identified by its rank, which runs from 0 to N-1 when there are N processes

MPI – Basic functions

These six functions allow you to write many programs:
MPI_Init      – Initialize MPI
MPI_Finalize  – Terminate MPI
MPI_Comm_size – How many processes are running?
MPI_Comm_rank – What is my process number?
MPI_Send      – Send a message
MPI_Recv      – Receive a message
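
The Hello World example later in this lecture exercises the first four of these. As a hedged illustration of the remaining two (not from the original slides), here is a minimal C sketch in which process 0 sends a single integer to process 1; it assumes the job is started with at least two processes, e.g. mpirun -np 2 a.out:

    /* Minimal point-to-point sketch (C bindings): process 0 sends
       one integer to process 1.  Assumes at least two processes. */
    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;                  /* arbitrary payload */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }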

Basic MPI: MPI_INIT

MPI_INIT must be the first MPI routine called in any program
MPI_INIT(ierr)
  ierr: integer error return value; 0 means success, non-zero means failure
Can only be called once
Sets up the environment to enable message passing
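
For reference (not on the original slide), the C binding passes the command-line arguments by address and reports the error code as the function's return value rather than through an ierr argument. A minimal sketch:

    #include "mpi.h"

    int main(int argc, char *argv[]) {
        /* In C, MPI_Init takes the addresses of argc and argv and
           returns MPI_SUCCESS (0) on success. */
        int ierr = MPI_Init(&argc, &argv);
        if (ierr != MPI_SUCCESS) {
            return 1;   /* could not set up the message-passing environment */
        }
        /* ... work ... */
        MPI_Finalize();
        return 0;
    }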

Basic MPI: MPI_FINALIZE

MPI_FINALIZE must be called by each process before it exits
MPI_FINALIZE(ierr)
  ierr: integer error return value; 0 means success, non-zero means failure
No other MPI routine can be called after MPI_FINALIZE
All pending communication must be completed before calling MPI_FINALIZE

MPI Basic Program Structure

Fortran:
      program main
      include 'mpif.h'
      integer ierr
      call MPI_INIT(ierr)
!     ... do some work ...
      call MPI_FINALIZE(ierr)
!     ... maybe do some additional local computation ...
      end

C:
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        MPI_Init(&argc, &argv);
        /* ... do some work ... */
        MPI_Finalize();
        /* ... maybe do some additional local computation ... */
        return 0;
    }

Groups and Communicators

We will not be using these directly, but it is important that you understand the routines
Groups can be thought of as sets of processes
Each group is associated with what is called a “communicator”
Upon startup, there is a single group containing all processes, associated with the communicator MPI_COMM_WORLD
Groups that are subsets of this original group can be created, each associated with its own communicator
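
As a hedged illustration only (this course will not need it), one common way such sub-groups are created is MPI_Comm_split, which partitions an existing communicator. The sketch below splits MPI_COMM_WORLD into even-ranked and odd-ranked sub-communicators:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int world_rank, sub_rank;
        MPI_Comm sub_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* color = world_rank % 2 puts even and odd ranks into two
           separate groups, each with its own communicator. */
        MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &sub_comm);
        MPI_Comm_rank(sub_comm, &sub_rank);

        printf("World rank %d has rank %d in its sub-communicator\n",
               world_rank, sub_rank);

        MPI_Comm_free(&sub_comm);
        MPI_Finalize();
        return 0;
    }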

MPI_COMM_RANK(comm, rank, ierr)

  comm: integer communicator; we will always use MPI_COMM_WORLD
  rank: upon return, the rank of the calling process
  ierr: integer error return code
This routine returns the relative rank of the calling process within the group associated with comm

MPI_COMM_SIZE(comm, size, ierr)

  comm: integer communicator identifier
  size: upon return, the number of processes in the group associated with comm; for our purposes, always the total number of processes
  ierr: integer error return code
This routine returns the number of processes in the group associated with comm

A very simple program: Hello World

      program main
      include 'mpif.h'
      integer ierr, size, rank
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierr)
      print *, 'Hello world from process', rank, 'of', size
      call MPI_FINALIZE(ierr)
      end
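
For comparison (not on the original slide), a sketch of the same program written with the C bindings:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[]) {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my process number   */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes? */

        printf("Hello world from process %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }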

Hello World

mpirun -np 4 a.out
  Hello World from 2 of 4
  Hello World from 0 of 4
  Hello World from 1 of 4
  Hello World from 3 of 4

mpirun -np 4 a.out
  Hello World from 3 of 4
  Hello World from 1 of 4
  Hello World from 2 of 4
  Hello World from 0 of 4

Note that the two runs print in different orders: the processes run concurrently, so their output arrives in no guaranteed order.

Progress Report

Report due next week: Nov. 6, 2002, before midnight
Format: PDF, PS, or DOC
Follow ‘Writing Formal Reports: An Approach for Engineering Students in 21st Century, 3rd Edition’
Contents:
  Title page
  Abstract – informative abstract
  Table of contents
  Introduction
  Discussion – time schedule and what you have completed so far
  Future work – details, what remains to be done
  References