High Performance Computing


High Performance Computing: MPI and C-Language Seminars 2010

Seminar Plan
Week 1 – Introduction, Data Types, Control Flow, Pointers
Week 2 – Arrays, Structures, Enums, I/O, Memory
Week 3 – Compiler Options and Debugging
Week 4 – MPI in C and Using the HPSG Cluster
Week 5 – "How to Build a Performance Model"
Weeks 6-9 – Coursework Troubleshooting (seminar tutors available in their office)

MPI in C

Introduction to MPI
MPI (Message Passing Interface) is a message-passing library, callable from C, that allows processes to communicate with each other.
No need for a shared memory space – all data passes via messages.
Every process can send to every other process, but data must be explicitly received.
Processes are kept synchronised by barriers.

MPI Hello World (1/2)
The most basic of MPI programs:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* starts MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* get current process id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* get process count */
    printf("Hello world from process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

MPI Hello World (2/2)
The MPI environment is established via the MPI_Init call.
MPI_COMM_WORLD – the default communicator, defined as a group of processes.
MPI_Comm_size – returns the number of processes in a communicator; for MPI_COMM_WORLD this is all of them.
MPI_Comm_rank – returns the position (rank) of the calling process within the given communicator.
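As a small sketch of how the rank is typically used (an illustrative example, not from the original slides), a program can branch on its rank so that, for instance, only process 0 prints a summary:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* only the process with rank 0 executes this branch */
        printf("Running on %d processes\n", size);
    }

    MPI_Finalize();
    return 0;
}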

Compiling MPI
MPI implementations provide compiler wrappers for different languages; we only need the C one.
mpicc – C compiler wrapper (for us, GCC underneath)
mpiCC / mpicxx / mpic++ – C++
mpif90 / mpif77 – Fortran
Compiling works in the same way as plain C:
mpicc -o myprogram helloworld.c

Running MPI
Once compiled, an MPI program must be launched with mpirun:
mpirun -np 2 myprogram
where 2 is the number of processes to run on.
As there is no synchronisation in the program, the order of the print statements is non-deterministic.
Note: killing MPI jobs without letting them call MPI_Finalize may leave stray processes behind.

Environment Variables
MPI and GCC are installed centrally, so their paths need to be added to your environment variables.
The Module package lets you quickly load and unload working environments; it is installed on the cluster (Deep Thought).
'module avail' – lists all available modules.
'module load gnu/openmpi' – loads gcc-4.3 and openmpi.
'module list' – shows currently loaded modules.
'module unload gnu/openmpi' – unloads the module.

Message Passing in MPI

MPI_Send
MPI_Send – the basic method of passing data. Each MPI_Send must have a matching MPI_Recv.
MPI_Send(message, length, data type, destination, tag, communicator);
Message – the actual data, passed as a pointer.
Length – number of elements in the message.
Data Type – the MPI data type of each element in the message.
Destination – rank of the process that receives the message.
Tag – identifier used to distinguish multiple messages.
Communicator – process group (e.g. MPI_COMM_WORLD).

MPI_Recv
The matching call for MPI_Send.
MPI_Recv(message, length, data type, source, tag, communicator, status);
Message – pointer to the memory where the data will be stored.
Length – number of elements in the message.
Data Type – the MPI data type of each element in the message.
Source – rank of the process sending the message.
Tag – identifier used to distinguish multiple messages.
Communicator – process group (e.g. MPI_COMM_WORLD).
Status – a structure holding the status of the receive.
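The status structure can be inspected after the receive. As a small sketch (not from the original slides, and assuming MPI has already been initialised and stdio.h is included), MPI_Get_count reports how many elements actually arrived, and the MPI_ANY_SOURCE / MPI_ANY_TAG wildcards accept a message from any sender:

int buffer[10];
int count;
MPI_Status status;

/* receive up to 10 ints from any source, with any tag */
MPI_Recv(buffer, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);

/* how many MPI_INT elements were actually received, and from whom */
MPI_Get_count(&status, MPI_INT, &count);
printf("Received %d ints from rank %d with tag %d\n",
       count, status.MPI_SOURCE, status.MPI_TAG);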

Message Passing Example
Sending an array from process 0 to process 1:

int size, rank, tag = 0;
int myarray[3];
MPI_Status status;

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);

if (rank == 0) {
    myarray[0] = 1; myarray[1] = 2; myarray[2] = 3;
    MPI_Send(myarray, 3, MPI_INT, 1, tag, MPI_COMM_WORLD);
} else {
    MPI_Recv(myarray, 3, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
}
MPI_Finalize();

Process Synchronisation
Sometimes we need to ensure that all processes have reached the same point of execution.
Synchronisation can be explicit (barriers) or implicit (blocking communication).
MPI_Barrier(MPI_COMM_WORLD); – waits until every process in the communicator has reached the barrier before any continue.
MPI_Send / MPI_Recv – blocking calls: MPI_Recv waits until the message has arrived, and MPI_Send waits until its buffer can safely be reused.
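As a small illustrative sketch (not from the original slides, and assuming rank has already been obtained with MPI_Comm_rank as in the earlier examples), a barrier is often placed between phases of a computation so no process starts the next phase early:

/* phase 1: every process does some local work */
printf("Process %d finished phase 1\n", rank);

/* no process proceeds until all have completed phase 1 */
MPI_Barrier(MPI_COMM_WORLD);

if (rank == 0) {
    printf("All processes reached the barrier; starting phase 2\n");
}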

Non-Blocking Communication
MPI_Isend / MPI_Irecv can be used instead of MPI_Send / MPI_Recv.
'I' stands for immediate – the call returns immediately, regardless of the status of the actual operation.
MPI_Isend – allows you to continue processing while the send happens.
MPI_Irecv – you must check that the data has arrived before using it.
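A small sketch of how that completion check is usually done (illustrative, not from the original slides; assumes rank has already been obtained as in the earlier examples): each non-blocking call fills in an MPI_Request, and MPI_Wait blocks until that operation has completed. Here rank 0 sends one int to rank 1:

int value = 42, received;
MPI_Request request;
MPI_Status status;

if (rank == 0) {
    MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
    /* ... other work can be done here while the send is in flight ... */
    MPI_Wait(&request, &status);   /* value must not be modified before this */
} else if (rank == 1) {
    MPI_Irecv(&received, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
    /* ... other work ... */
    MPI_Wait(&request, &status);   /* received is only valid after this */
    printf("Got %d\n", received);
}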

Accessing the Cluster

Deepthought – IBM Cluster
42 nodes, 2 cores per node (Pentium III, 1.4 GHz), 2 GB RAM per node.
Myrinet fibre-optic interconnect.
Log in:  ssh hpc06XXXXX@deepthought.dcs.warwick.ac.uk
Copy files:  scp ./karman.tar.gz hpc06XXXXX@deepthought.dcs.warwick.ac.uk:/path/
Headnode (Frankie) – not to be used for running jobs. All MPI jobs on Frankie will be killed.

PBS (1/3)
We use Torque (OpenPBS) with the MAUI scheduler.
Listing jobs in the queue:

fjp@frankie:~$ qstat -a
frankie:
                                                        Req'd  Req'd   Elap
Job ID         Username Queue Jobname SessID NDS TSK Memory Time   S Time
-------------- -------- ----- ------- ------ --- --- ------ -----  - -----
27613.frankie  sdh      hpsg  octave   11363   1  --     --  3000: R 68:44
27614.frankie  sdh      hpsg  octave   11434   1  --     --  3000: R 68:41

Status flags: Q – queued. R – running. E – ending (staging out files) – NOT error! C – complete.

PBS (2/3)
Submitting a job:
From a submit file:  qsub -V -N <name> -l nodes=x:ppn=y submit.pbs
An interactive job:  qsub -V -N <name> -l nodes=x:ppn=y -I

Submit files:
#!/bin/bash
#PBS -V
cd $PBS_O_WORKDIR
mpirun ./myprog

Deleting a job:  qdel <jobid>
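As a slightly fuller sketch (the job name and resource values below are illustrative assumptions, not taken from the slides), the node count and walltime can also be requested as directives inside the submit file rather than on the qsub command line:

#!/bin/bash
#PBS -V                      # export the current environment to the job
#PBS -N myjob                # job name (illustrative)
#PBS -l nodes=2:ppn=2        # 2 nodes, 2 processors per node (illustrative values)
#PBS -l walltime=00:10:00    # maximum run time of 10 minutes (illustrative)

cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
mpirun ./myprog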

PBS (3/3)
Node information:

fjp@frankie:~$ pbsnodes -a
vogon0.deepthought.hpsg.dcs.warwick.ac.uk
    state = job-exclusive
    np = 2
    properties = vogon
    ntype = cluster
    jobs = 0/27613.frankie, 1/27614.frankie
    status = .........

Standard output and error:
For interactive jobs, output appears as normal.
For batch jobs: output file – <jobname / submit file name>.o<jobid>; error file – <jobname / submit file name>.e<jobid>.
File I/O takes place as usual, but concurrent writes to the same file can be problematic – avoid them.

Queues
Different queues give access to different resources with different priorities.
Debug queue – high priority, low core count (~4) – must be requested explicitly: qsub -q debug ....
Interactive queue – high priority, medium core count (~8) – no need to specify a queue.
Batch queue – normal priority, high core count (~64).

Warning
The cluster is a shared resource – don't leave your work until the last minute; the queue can get very busy.
Don't leave interactive jobs running when not in use.
Once again – do not run jobs on Frankie!