
Introduction to Parallel Programming at MCSR
Presentation at Delta State University, January 17, 2007
Jason Hale

What is MCSR’s Mission?
Mississippi Center for Supercomputer Research
–Established in 1987 by the Mississippi Legislature
–Mission: Enhance Computational Research Climate at Mississippi’s 8 Public Universities
–Also: Support High Performance Computing (HPC) Education in Mississippi

How Does MCSR Support Research?
Research Accounts on MCSR Supercomputers
–Available to all researchers at MS universities
–No cost to the researcher or the institution
–800+ Research Accounts Active in 2006
Services
–Consulting
–Seminars
–Software Installation, Compiling, and Troubleshooting on MCSR systems

What Research at MCSR?
MCSR research users reported a total of over $38,000,000 in Active Research Funds (FY 2006)
Currently Active Research Areas:
–Computational Chemistry
–Civil Engineering
–Operations Research
–Fluid Dynamics
–…

Education at MCSR
Over 64 Courses Supported at:
–Alcorn State University
–Delta State University
–The University of Southern Mississippi
–Mississippi Valley State University
–The University of Mississippi
Topics: C/C++, Fortran, MPI, OpenMP, MySQL, HTML, Javascript, Matlab, PHP, Perl, …

Software at MCSR
–Gaussian 03, GAMESS, Amber, MPQC, NWChem (chemistry packages)
–PBS Professional 7.0 (for batch scheduling)
–Fortran, C, C++ (Intel, PGI, GNU)
–Abaqus, Patran (Engineering)
What software do you need?

What Research at MCSR?

What is a Supercomputer?
Loosely speaking, it is a “large” computer with an architecture that has been optimized for solving bigger problems faster than a conventional desktop, mainframe, or server computer.
–Pipelining
–Parallelism (lots of CPUs or computers)

Supercomputers at MCSR: redwood
–224-CPU SGI Altix 3700 Supercomputer
–Shared memory

Supercomputers at MCSR: mimosa
–253-CPU Intel Linux Cluster (Pentium 4)
–Distributed memory: 500 MB – 1 GB per node
–Gigabit Ethernet

Supercomputers at MCSR: sweetgum
–128-CPU SGI Origin 2800 Supercomputer
–64 GB of shared memory

What is Parallel Computing?
Using more than one computer (or processor) to complete a computational problem
Examples of parallelism in everyday life?

How May a Problem be Parallelized?
Data Decomposition – Examples?
Task Decomposition – Examples?

Introduction to Parallel Programming at MCSR
Message Passing Computing
–Processes coordinate and communicate results via calls to message passing library routines
–Programmers “parallelize” the algorithm and add message calls
–At MCSR, this is via MPI programming with C or Fortran on:
 Sweetgum – Origin 2800 Supercomputer (128 CPUs)
 Mimosa – Beowulf Cluster with 253 Nodes
 Redwood – Altix 3700 Supercomputer (224 CPUs)
Shared Memory Computing
–Processes or threads coordinate and communicate results via shared memory variables
–Care must be taken not to modify the wrong memory areas
–At MCSR, this is via OpenMP programming with C or Fortran on sweetgum

Message Passing Computing at MCSR
–Process Creation
–Slave and Master Processes
–Static vs. Dynamic Work Allocation
–Compilation
–Models
–Basics
–Synchronous Message Passing
–Collective Message Passing
–Deadlocks
–Examples

Message Passing Process Creation
Dynamic
–One process spawns other processes & gives them work
–PVM
–More flexible
–More overhead: process creation and cleanup
Static
–Total number of processes determined before execution begins
–MPI

Message Passing Processes
–Often, one process will be the master, and the remaining processes will be the slaves
–Each process has a unique rank/identifier
–Each process runs in a separate memory space and has its own copy of variables

Message Passing Work Allocation
Master Process
–Does initial sequential processing
–Initially distributes work among the slaves, statically or dynamically
–Collects the intermediate results from the slaves
–Combines them into the final solution
Slave Processes
–Receive work from, and return results to, the master
–May distribute work amongst themselves (decentralized load balancing)

Message Passing Compilation
Compile/link programs with message passing libraries using regular (sequential) compilers
Fortran MPI example: include 'mpif.h'
C MPI example: #include "mpi.h"
See the MCSR website (…?pagename=mpi.inc) for exact MCSR MPI directory locations

Message Passing Models
SPMD – Single Program/Multiple Data
–Single version of the source code is used by every process
–Master executes one portion of the program; slaves execute another; some portions are executed by both
–Requires one compilation per architecture type
–MPI
MPMD – Multiple Program/Multiple Data
–One source code for the master; another for the slaves
–Each must be compiled separately
–PVM

Message Passing Basics
Each process must first establish the message passing environment
Fortran MPI example:
  integer ierror
  call MPI_INIT(ierror)
C MPI example:
  int ierror;
  ierror = MPI_Init(&argc, &argv);
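The original slides do not include a complete listing at this point; the following is a minimal, self-contained sketch (the filename init_only.c is assumed) showing the environment being established and then shut down:

/* init_only.c - minimal MPI environment setup/teardown (illustrative sketch) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    /* Establish the message passing environment before any other MPI call */
    MPI_Init(&argc, &argv);

    printf("MPI environment established\n");

    /* Every process must shut the environment down before exiting */
    MPI_Finalize();
    return 0;
}

On the MCSR systems described later, such a file would be compiled against the MPI library (the PBS script slide compiles with pgcc and -lmpich) and launched with mpirun.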

Message Passing Basics
Each process has a rank, or id number
–0, 1, 2, … n-1, where there are n processes
With SPMD, each process must determine its own rank by calling a library routine
Fortran MPI example:
  integer comm, rank, ierror
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierror)
C MPI example:
  ierror = MPI_Comm_rank(MPI_COMM_WORLD, &rank);

Message Passing Basics
Each process has a rank, or id number
–0, 1, 2, … n-1, where there are n processes
Each process may use a library call to determine how many total processes it has to play with
Fortran MPI example:
  integer comm, size, ierror
  call MPI_COMM_SIZE(MPI_COMM_WORLD, size, ierror)
C MPI example:
  ierror = MPI_Comm_size(MPI_COMM_WORLD, &size);
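Putting the last few slides together, here is a short “hello world” sketch (not from the original slides; the filename hello_mpi.c is assumed) in which every process reports its own rank and the total number of processes:

/* hello_mpi.c - each process prints its rank and the total process count */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* establish the environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I?       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes total? */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}

Launched with, for example, mpirun -np 4, each of the 4 processes prints its own line.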

Message Passing Basics
Each process has a rank, or id number
–0, 1, 2, … n-1, where there are n processes
Once a process knows the size, it also knows the ranks (id #’s) of the other processes, and can send a message to or receive a message from any other process.
Fortran MPI examples (the first three arguments describe the data; the destination/source, tag, and communicator form the message envelope; receive also returns a status):
  call MPI_SEND(buf, count, datatype, dest, tag, comm, ierror)
  call MPI_RECV(buf, count, datatype, source, tag, comm, status, ierror)

MPI Send and Receive Arguments
–buf: starting location of the data
–count: number of elements
–datatype: MPI_INTEGER, MPI_REAL, MPI_CHARACTER, …
–dest: rank of the process to whom the message is being sent
–source: rank of the sender from whom the message is being received, or MPI_ANY_SOURCE
–tag: integer chosen by the program to indicate the type of message, or MPI_ANY_TAG
–comm: identifies the process team, e.g., MPI_COMM_WORLD
–status: the result of the call (such as the # of data items received)
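As an illustration of these arguments (a sketch, not part of the original slides), the fragment below has process 0 send one integer to process 1; it assumes at least two processes were started, and the comments mark which arguments carry the data and which form the envelope:

/* send_recv.c - one integer passed from rank 0 to rank 1 (illustrative sketch) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* data: &value, 1, MPI_INT   envelope: dest=1, tag=0, MPI_COMM_WORLD */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* data: &value, 1, MPI_INT   envelope: source=0, tag=0, MPI_COMM_WORLD */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}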

Synchronous Message Passing
Message calls may be blocking or nonblocking
Blocking Send
–Waits to return until the message has been received by the destination process
–This synchronizes the sender with the receiver
Nonblocking Send
–Return is immediate, without regard for whether the message has been transferred to the receiver
–DANGER: the sender must not change the variable containing the message before the transfer is done
–MPI_Isend() is nonblocking

Synchronous Message Passing
Locally Blocking Send
–The message is copied from the send parameter variable to an intermediate buffer in the calling process
–Returns as soon as the local copy is complete
–Does not wait for the receiver to transfer the message from the buffer
–Does not synchronize
–The sender’s message variable may safely be reused immediately
–MPI_Send() is locally blocking

Synchronous Message Passing
Blocking Receive
–The call waits until a message matching the given tag has been received from the specified source process
–MPI_Recv() is blocking
Nonblocking Receive
–If this process has a qualifying message waiting, retrieves that message and returns
–If no message has been received yet, returns anyway
–Used if the receiver has other work it can be doing while it waits
–Status tells the receiver whether the message was received
–MPI_Irecv() is nonblocking
–MPI_Wait() and MPI_Test() can be used to periodically check whether the message is ready, and finally wait for it, if desired
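A nonblocking sketch (assumed for illustration, not from the slides): each side posts its operation, is free to do other work while the message is in flight, and calls MPI_Wait before touching the buffer again:

/* nonblocking.c - MPI_Isend/MPI_Irecv completed with MPI_Wait (illustrative sketch) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, value = 0;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 7;
        MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &request);
        /* ... other useful work here, but do NOT modify 'value' yet ... */
        MPI_Wait(&request, &status);    /* safe to reuse 'value' after this */
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);
        /* ... other useful work while the message is in transit ... */
        MPI_Wait(&request, &status);    /* 'value' is valid only after this */
        printf("Process 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}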

Collective Message Passing
Broadcast
–Sends a message from one process to all processes in the group
Scatter
–Distributes each element of a data array to a different process for computation
Gather
–The reverse of scatter: retrieves data elements into an array from multiple processes

Collective Message Passing w/MPI
–MPI_Bcast(): broadcast from the root to all other processes
–MPI_Gather(): gather values from a group of processes
–MPI_Scatter(): scatter a buffer in parts to a group of processes
–MPI_Alltoall(): send data from all processes to all processes
–MPI_Reduce(): combine values on all processes into a single value
–MPI_Reduce_scatter(): combine values on all processes, then scatter the results
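A short collective sketch (illustrative, not from the slides): the root broadcasts a problem size with MPI_Bcast, every process computes a stand-in partial result, and MPI_Reduce sums the partial results back onto the root:

/* collective.c - MPI_Bcast followed by MPI_Reduce (illustrative sketch) */
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int rank, size, n = 0, partial, total;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) n = 100;                      /* only the root knows n initially */

    /* Broadcast n from rank 0 to every process in the communicator */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    partial = n * rank;                          /* stand-in for each process's share of work */

    /* Combine the partial results onto rank 0 with a sum reduction */
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Total across %d processes: %d\n", size, total);

    MPI_Finalize();
    return 0;
}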

Message Passing Deadlock
Deadlock can occur when all critical processes are waiting for messages that never come, or are waiting for buffers to clear out so that their own messages can be sent
Possible Causes
–Program/algorithm errors
–Message and buffer sizes
Solutions
–Order operations more carefully
–Use nonblocking operations
–Add debugging output statements to your code to find the problem
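To make the “order operations more carefully” fix concrete, here is a sketch (not from the slides): if both ranks called MPI_Send first, each could block waiting for the other once the messages no longer fit in the system buffers; alternating the send/receive order avoids that. MPI_Sendrecv() is another standard way to express such an exchange safely.

/* exchange.c - a deadlock-safe ordering for a two-way exchange (illustrative sketch) */
#include <stdio.h>
#include "mpi.h"

#define N 100000

int main(int argc, char *argv[])
{
    int rank;
    static int out[N], in[N];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Deadlock-prone version: both ranks send first, then receive.
       MPI_Send(out, N, MPI_INT, 1 - rank, 0, MPI_COMM_WORLD);
       MPI_Recv(in,  N, MPI_INT, 1 - rank, 0, MPI_COMM_WORLD, &status);        */

    /* Safe ordering: rank 0 sends then receives, rank 1 receives then sends */
    if (rank == 0) {
        MPI_Send(out, N, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Recv(in,  N, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
    } else if (rank == 1) {
        MPI_Recv(in,  N, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Send(out, N, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}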

Portable Batch System on SGI
Sweetgum:
–PBS Professional 7.0 is installed on sweetgum

Portable Batch System on Linux
Mimosa PBS configuration:
–PBS Professional 7.1 is installed on mimosa

Sample Portable Batch System Script
mimosa% vi example.pbs

  #!/bin/bash
  #PBS -l nodes=4          (MIMOSA)
  #PBS -l ncpus=4          (SWEETGUM)
  #PBS -q MCSR-4N
  #PBS -N example
  export PGI=/usr/local/apps/pgi-6.1
  export PATH=$PGI/linux86/6.1/bin:$PATH
  cd $PWD
  rm *.pbs.[eo]*
  pgcc -o add_mpi.exe add_mpi.c -lmpich
  mpirun -np 4 add_mpi.exe

mimosa% qsub example.pbs
(qsub returns the job id, of the form <job number>.mimosa.mcsr.olemiss.edu)

Sample Portable Batch System Script
mimosa% qstat

  Job id     Name       User    Time Use  S  Queue
  mimosa     4_3.pbs    r       :05:17    R  MCSR-2N
  mimosa     2_4.pbs    r       :00:58    R  MCSR-2N
  mimosa     GC8w.pbs   lgorb   01:03:25  R  MCSR-2N
  mimosa     3_6.pbs    r       :01:54    R  MCSR-2N
  mimosa     GCr8w.pbs  lgorb   00:59:19  R  MCSR-2N
  mimosa     ATr7w.pbs  lgorb   00:55:29  R  MCSR-2N
  mimosa     example    tpirim  0         Q  MCSR-16N
  mimosa     try1       cs      00:00:00  R  MCSR-CA

Further information about using PBS at MCSR: …?pagename=pbs_1.inc&menu=vMBPBS.inc

For More Information
Hello World MPI examples on Sweetgum (/usr/local/appl/mpihello) and Mimosa (/usr/local/apps/ppro/mpiworkshop)
Websites
–MPI at MCSR:
–PBS at MCSR:
–Mimosa Cluster:
–MCSR Accounts:

MPI Programming Exercises
Hello World
–sequential
–parallel (w/MPI and PBS)
Add an Array of Numbers
–sequential
–parallel (w/MPI and PBS) – see the sketch below for one possible approach
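One possible sketch for the parallel version of the array-sum exercise (an assumption about how the exercise could be solved, not the official MCSR solution; the filename add_mpi.c matches the one compiled in the PBS script slide): rank 0 builds the array, MPI_Scatter hands each process an equal chunk, each process sums its chunk, and MPI_Reduce combines the partial sums.

/* add_mpi.c - parallel array sum (illustrative sketch);
   assumes N divides evenly by the number of processes   */
#include <stdio.h>
#include <stdlib.h>
#include "mpi.h"

#define N 1000

int main(int argc, char *argv[])
{
    int rank, size, i, chunk;
    int *data = NULL, *part;
    long local_sum = 0, total_sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    chunk = N / size;
    part = malloc(chunk * sizeof(int));

    if (rank == 0) {                              /* master builds the full array */
        data = malloc(N * sizeof(int));
        for (i = 0; i < N; i++) data[i] = i + 1;
    }

    /* Distribute an equal chunk of the array to every process */
    MPI_Scatter(data, chunk, MPI_INT, part, chunk, MPI_INT, 0, MPI_COMM_WORLD);

    for (i = 0; i < chunk; i++) local_sum += part[i];

    /* Combine the partial sums on the master */
    MPI_Reduce(&local_sum, &total_sum, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0) printf("Sum of 1..%d = %ld\n", N, total_sum);

    free(part);
    if (rank == 0) free(data);
    MPI_Finalize();
    return 0;
}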