MPI Basics
Introduction to Parallel Programming and Cluster Computing
University of Washington / Idaho State University
Charlie Peck, Earlham College

Preliminaries
To set up a cluster we must:
- Configure the individual computers
- Establish some form of communication between the machines
- Run the program(s) that exploit the above
MPI is all about that exploitation.

So what does MPI do?
Simply stated, MPI moves data and commands between processes: data that is needed for a computation, or that comes out of one.
Now just wait a second, shouldn't that be processors?

Simple or Complex?
MPI has 100+ very complex library calls:
- 52 point-to-point communication
- 16 collective communication
- 30 groups, contexts, and communicators
- 16 process topologies
- 13 environmental inquiry
- 1 profiling
But MPI only needs 6 very simple (complex) library calls.

Six Basic MPI Commands
Start and stop:
- MPI_Init(...)
- MPI_Finalize(...)
Know yourself and others:
- MPI_Comm_rank(...)
- MPI_Comm_size(...)
Message passing:
- MPI_Send(...)
- MPI_Recv(...)
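To see the six calls working together before the details, here is a minimal sketch (not from the slides; the program and variable names are illustrative) in which rank 0 sends one integer to rank 1:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char **argv) {
        int rank, size, value = 42;
        MPI_Status status;

        MPI_Init(&argc, &argv);                  /* start MPI */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* who am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many of us? */

        if (rank == 0 && size > 1) {
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("Rank 1 received %d\n", value);
        }

        MPI_Finalize();                          /* stop MPI */
        return 0;
    }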

Essential MPI Organization
- Data representation is standardized: MPI datatypes
- Harnessing processes for a task: MPI communicators
- Specifying a kind of message: MPI tags
- How many processes and processors: -np and -machinefile
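The -np and -machinefile flags belong to the MPI launcher, not to your program. A sketch with an MPICH-style mpirun (the machinefile name and host names are made up):

    $ cat machines          # one hostname per line
    node00
    node01
    $ mpirun -np 4 -machinefile machines ./mpihello

With -np 4 and two hosts, the launcher typically spreads four processes across the two machines; the exact placement policy depends on the MPI implementation.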

Data Representation
Signed integer types:
- MPI_CHAR
- MPI_SHORT
- MPI_INT
- MPI_LONG
Unsigned integer types:
- MPI_UNSIGNED_CHAR
- MPI_UNSIGNED_SHORT
- MPI_UNSIGNED
- MPI_UNSIGNED_LONG

Data Representation (continued)
Floating-point types:
- MPI_FLOAT
- MPI_DOUBLE
- MPI_LONG_DOUBLE
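Each MPI datatype names the element type of a buffer, and the count parameter says how many elements travel in one message. A small sketch (the destination rank and tag here are arbitrary):

    double samples[10];
    /* ... fill samples ... */
    MPI_Send(samples, 10, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);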

Data Representation (continued)
MPI_BYTE:
- Device independent
- Exactly 8 bits
MPI_PACKED:
- Allows non-contiguous data
- MPI_Pack(...)
- MPI_Unpack(...)
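A sketch of how MPI_Pack and MPI_Unpack bundle mixed types into a single MPI_PACKED message (the 64-byte buffer and the values are illustrative):

    char buf[64];
    int  pos = 0;      /* running offset into buf, updated by each call */
    int  n   = 7;
    double x = 3.14;

    /* sender: pack an int and a double, then ship one message */
    MPI_Pack(&n, 1, MPI_INT,    buf, 64, &pos, MPI_COMM_WORLD);
    MPI_Pack(&x, 1, MPI_DOUBLE, buf, 64, &pos, MPI_COMM_WORLD);
    MPI_Send(buf, pos, MPI_PACKED, 1, 0, MPI_COMM_WORLD);

    /* receiver: after MPI_Recv(buf, 64, MPI_PACKED, ...), unpack in order */
    pos = 0;
    MPI_Unpack(buf, 64, &pos, &n, 1, MPI_INT,    MPI_COMM_WORLD);
    MPI_Unpack(buf, 64, &pos, &x, 1, MPI_DOUBLE, MPI_COMM_WORLD);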

Under the Hood of the Six
MPI_Init(int *argc, char ***argv)
- We pass the addresses of argc and argv (rather than the usual int argc, char **argv) because MPI may modify the argument list, using it to pass startup data to every process.
MPI_Finalize()
- No arguments; call it once, when you are done with MPI.

Under the Hood of the Six (continued)
MPI_Comm_rank(MPI_Comm comm, int *rank)
- Stores this process's rank within the communicator (0 through size-1) in *rank.
MPI_Comm_size(MPI_Comm comm, int *size)
- Stores the number of processes in the communicator in *size.
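Rank and size together are how an MPI program divides up work. A common idiom, sketched here with a hypothetical per-item function do_work() and an arbitrary item count:

    int rank, size, i, N = 100;   /* N items of work */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each process takes every size-th item, starting at its own rank */
    for (i = rank; i < N; i += size)
        do_work(i);               /* hypothetical work function */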

Under the Hood of the Six (continued)
MPI_Send(void *buf, int count, MPI_Datatype datatype,
         int dest, int tag, MPI_Comm comm)
MPI_Recv(void *buf, int count, MPI_Datatype datatype,
         int source, int tag, MPI_Comm comm, MPI_Status *status)
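The extra MPI_Status argument on the receive side records who actually sent the message and with what tag, which matters when receiving with wildcards; MPI_Get_count recovers the number of elements actually received. A short sketch:

    char message[256];
    MPI_Status status;
    int received;

    MPI_Recv(message, 256, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
             MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_CHAR, &received);
    fprintf(stderr, "%d chars from rank %d, tag %d\n",
            received, status.MPI_SOURCE, status.MPI_TAG);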

MPI Hello World
Fire up a qsub interactive shell on AC:
    cp ~tra5/mpihello.c .
    qsub -I
    mpdstartup
    mpicc -o mpihello mpihello.c
    mpirun -np 4 ./mpihello

MPI Hello World
The program in six pieces:
1. Including the right stuff
2. General declarations
3. MPI setup
4. Client-side code
5. Server-side code
6. The grand finale

MPI Hello World: Including the Right Stuff

    #include <stdio.h>
    #include <string.h>   /* for strlen(), used when sending */
    #include "mpi.h"

    #define SERVER 0
    #define TAG    2

MPI Hello World: General Declarations

    int main(int argc, char **argv) {
        int my_rank, world_size;
        int destination = SERVER, source;
        int tag = TAG, length;
        char message[256], name[80];
        MPI_Status status;

MPI Hello World: MPI Setup

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);
        MPI_Get_processor_name(name, &length);
        sprintf(message, "Process %d (%d of %d) on %s!",
                my_rank, my_rank + 1, world_size, name);

MPI Hello World: Client-side Code

        if (my_rank != SERVER) {   // Client Section
            fprintf(stderr, "Client: process %d\n", my_rank);
            MPI_Send(message, strlen(message) + 1, MPI_CHAR,
                     destination, tag, MPI_COMM_WORLD);

MPI Hello World: Server-side Code

        } else {                   // Server Section
            fprintf(stderr, "Server: process %d\n", my_rank);
            fprintf(stderr, "%s\n", message);
            for (source = 1; source < world_size; source++) {
                MPI_Recv(message, 256, MPI_CHAR, source, tag,
                         MPI_COMM_WORLD, &status);
                fprintf(stderr, "%s\n", message);
            }

MPI Hello World: The Grand Finale

        }                          // end of the client/server if-else

        fprintf(stderr, "Calling Finalize %d\n", my_rank);
        MPI_Finalize();
        return 0;
    }
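For reference, one possible run with -np 4 might look like this; the interleaving of the stderr lines and the host names will vary from run to run and cluster to cluster (the node names here are made up):

    Server: process 0
    Client: process 1
    Client: process 2
    Client: process 3
    Process 0 (1 of 4) on node00!
    Process 1 (2 of 4) on node00!
    Process 2 (3 of 4) on node01!
    Process 3 (4 of 4) on node01!
    Calling Finalize 0
    Calling Finalize 1
    Calling Finalize 2
    Calling Finalize 3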