Tutorial on MPI Experimental Environment for ECE5610/CSC6220

Outline
- The WSU Grid cluster
- How to log in to the Grid
- How to run your program on a single node
- How to run your program on multiple nodes

WSU Grid Cluster
The WSU Grid Cluster is a high-performance computing system that hosts and manages research-related projects. The Grid currently provides a combined 4,568 cores (1,346 Intel cores and 3,222 AMD cores), over 13.5 TB of RAM, and 1.2 PB of disk space. The system is open to every researcher at WSU.

Login to the Grid
Host name: grid.wayne.edu, Port: 22
Download putty.exe.

Login to the Grid
Use putty.exe to log in:
- Username: ab1234 (your AccessID)
- Password: your pipeline password

Login to the Grid
You can start writing an MPI program now!

Start MPI Programming
MPI environment
- Initialize and finalize
- Know who I am and my community
Writing MPI programs
- Similar to writing a C program
- Call MPI functions
Compiling and running MPI programs
- Compiler: mpicc
- Execution: mpiexec
Example: copy hello.c to your home directory

Initialize and Finalize the Environment
Initialize the MPI environment before calling any other MPI function:
    int MPI_Init(int *argc, char ***argv)
Finalize the MPI environment before terminating your program:
    int MPI_Finalize(void)
Both functions must be called by every process; no other MPI calls are allowed before MPI_Init or after MPI_Finalize.
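As a minimal sketch (not from the original slides), the required call order looks like this; the hello.c program a few slides below adds the size, rank, and processor-name queries:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    /* No MPI calls are allowed before MPI_Init. */
    MPI_Init(&argc, &argv);

    printf("MPI environment is initialized\n");

    /* No MPI calls are allowed after MPI_Finalize. */
    MPI_Finalize();
    return 0;
}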

Finding out about the Environment
Two important questions arise early in a parallel program:
- How many processes are participating in this computation?
- Who am I?
MPI provides functions to answer these questions:
- MPI_Comm_size reports the number of processes.
- MPI_Comm_rank reports the rank, a number between 0 and size-1, identifying the calling process.

First Program hello.c: "Hello World!"

#include "mpi.h"
#include <stdio.h>

int main(int argc, char *argv[])
{
    int myid, numprocs, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);

    printf("hello world, I am process %d of %d on %s\n",
           myid, numprocs, processor_name);

    MPI_Finalize();
    return 0;
}

Compile and run your program
Compile hello.c with mpicc and launch it with mpiexec (for example, mpicc hello.c -o hello followed by mpiexec -n 4 ./hello).
Questions:
- Why does the output appear in a random rank order?
- Can we serialize the rank order?

MPI Basic (Blocking) Send
MPI_Send(start, count, datatype, dest, tag, comm)
The message buffer is described by (start, count, datatype). The target process is identified by dest, its rank within the communicator comm. When this function returns, the data has been handed off to the system and the buffer can safely be reused, but the message may not yet have been received by the target process.

MPI Basic (Blocking) Receive
MPI_Recv(start, count, datatype, source, tag, comm, status)
Waits until a message matching on source and tag is received from the system; only then may the buffer be used. source is a rank in the communicator comm, or MPI_ANY_SOURCE to accept a message from any sender.
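As an illustration (a minimal sketch, not taken from the original slides, and assuming the program is launched with at least two processes), the following pairs one MPI_Send with one matching MPI_Recv; hello_order.c below generalizes the same pattern to a chain of processes:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        value = 42;
        /* Send one int to rank 1 with tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (myid == 1) {
        /* Block until the matching message from rank 0 arrives. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}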

Processes Execution in Order
I. Process i prints its greeting, then sends a message to process i+1.
II. Only after receiving that message does process i+1 print its greeting and send its own message on.

The Program hello_order.c

#include "mpi.h"
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int myid, numprocs, namelen;
    char processor_name[MPI_MAX_PROCESSOR_NAME];
    char message[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Get_processor_name(processor_name, &namelen);

    if (0 == myid) {
        /* Rank 0 prints first and starts the chain. */
        printf("hello world, I am process %d of %d on %s\n",
               myid, numprocs, processor_name);
        strcpy(message, "next");
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, myid + 1, 99,
                 MPI_COMM_WORLD);
    } else if (myid < (numprocs - 1)) {
        /* Middle ranks wait for their predecessor, print, then pass the message on. */
        MPI_Recv(message, 100, MPI_CHAR, myid - 1, 99, MPI_COMM_WORLD, &status);
        printf("hello world, I am process %d of %d on %s\n",
               myid, numprocs, processor_name);
        MPI_Send(message, strlen(message) + 1, MPI_CHAR, myid + 1, 99,
                 MPI_COMM_WORLD);
    } else {
        /* The last rank only receives and prints. */
        MPI_Recv(message, 100, MPI_CHAR, myid - 1, 99, MPI_COMM_WORLD, &status);
        printf("hello world, I am process %d of %d on %s\n",
               myid, numprocs, processor_name);
    }

    MPI_Finalize();
    return 0;
}

Results
Discussion: parallel programs can use message communication to achieve determinism; here the send/receive chain forces the greetings to be printed in rank order.

Run programs on multiple nodes
Edit the job running script, job.sh. This job requests 2 nodes with 2 processors each and is submitted to the queue mtxq:

#!/bin/bash
#PBS -l ncpus=4
#PBS -l nodes=2:ppn=2
#PBS -m ea
#PBS -q mtxq
#PBS -o grid.wayne.edu:~fb4032/tmp3/output_file.64
#PBS -e grid.wayne.edu:~fb4032/tmp3/error_file.64
/wsu/arch/x86_64/mpich/mpich icc/bin/mpiexec \
    -machinefile $PBS_NODEFILE \
    -n 8 \
    /wsu/home/fb/fb40/fb4032/main

Run programs on multiple nodes
- -l specifies the resource list: ncpus is the number of CPUs and nodes is the number of nodes (with ppn processors per node)
- -q specifies the queue the job is submitted to (mtxq here)
- -m ea asks PBS to e-mail you when the job ends or aborts
- -o specifies the location of the output file
- -e specifies the location of the error file
- mpiexec's -machinefile $PBS_NODEFILE and -n 8 options run 8 MPI processes on the allocated hosts

Execution on Multiple Nodes
Make sure you change the permissions of job.sh (e.g., chmod u+x job.sh) before you submit it.

Execution on Multiple Nodes
- Use "qsub job.sh" to submit the job.
- Use "qme" to check the status of the job.
- Use "qdel vpbs1" to delete the job if necessary (vpbs1 is the job ID).

Execution on Multiple Nodes
The output will be copied to the location specified in job.sh; in this case it is ~/tmp3/output_file.64.

Useful Links
- Grid tutorial: ials/index.html
- Job scheduling on the Grid: ials/pbs.html
- Step by step to run jobs on the Grid: ials/jobs/index.html