Using Paraguin to Create Parallel Programs


Assignment 2: Using Paraguin to Create Parallel Programs

Cluster at UNCW

[Diagram: user computers connect over an Ethernet interface to the dedicated cluster; the submit host fronts the cluster, and a switch links the master/head node to the compute nodes]

Submit Host: babbage
Head Node: harpua
Compute Nodes: compute-0-0, compute-0-1, compute-0-2, …

Cluster at UNCW

We use the Sun Grid Engine (SGE) to schedule jobs on the cluster. This gives users exclusive use of the compute nodes, so that one user's application doesn't interfere with the performance of another's. The scheduler (SGE) is responsible for allocating compute nodes to jobs exclusively.

Compile as normal:

$ mpicc hello.c -o hello

SGE

Running, however, is done through a job submission file. Some SGE commands:

qsub <job submission file> - submits a job to the scheduler to run
qstat - see the status of submitted jobs (waiting, queued, running, terminated, etc.)
qdel <#> - deletes a job (by number) from the system
qhost - see a list of hosts

SGE

Example job submission file (hello.sge):

#!/bin/sh
# Usage: qsub hello.sge
#$ -S /bin/sh
#$ -pe orte 16       # Specify how many processors we want
# -- our name ---
#$ -N Hello          # Name for the job
#$ -l h_rt=00:01:00  # Request 1 minute to execute
#$ -cwd              # Make sure that the .e and .o files arrive in the working directory
#$ -j y              # Merge the standard out and standard error to one file
mpirun -np $NSLOTS ./hello

# Usage: qsub hello.sge #$ -S /bin/sh #$ -pe orte 16 # Specify
SGE

Taking the job submission file (hello.sge) a few lines at a time, the first lines select the shell and the parallel environment:

#!/bin/sh
# Usage: qsub hello.sge
#$ -S /bin/sh
#$ -pe orte 16       # Specify how many processors we want

SGE

The next lines of hello.sge:

# -- our name ---
#$ -N Hello          # Name for the job
#$ -l h_rt=00:01:00  # Request 1 minute to execute

-N gives the name of the job, which is also used to name the output files: Hello.o### and Hello.po###. The -l h_rt option indicates that the job will need only a minute. This is important so that SGE will clean up if the program hangs or terminates incorrectly. You may need to increase the time for longer programs, or SGE will terminate the program before it has completed.

SGE

The next lines of hello.sge:

#$ -cwd              # Make sure that the .e and .o files arrive in the working directory
#$ -j y              # Merge the standard out and standard error to one file

-cwd does the job in the current directory. SGE would normally create 3 files: Hello.o##, Hello.e##, and Hello.po##. The -j y option merges the Hello.o and Hello.e files (standard out and standard error).

SGE

And finally, the command to run the MPI program:

mpirun -np $NSLOTS ./hello

$NSLOTS is the same number given with the #$ -pe orte 16 line.

SGE Example

$ qsub hello.sge
Your job 106 ("Hello") has been submitted
$ qstat
job-ID  prior    name   user     state  submit/start at      queue  slots  ja-task-ID
--------------------------------------------------------------------------------------
   106  0.00000  Hello  cferner  qw     09/04/2012 09:08:38          16
$

The state of "qw" means queued and waiting.

SGE Example

$ qstat
job-ID  prior    name   user     state  submit/start at      queue                    slots  ja-task-ID
--------------------------------------------------------------------------------------------------------
   106  0.55500  Hello  cferner  r      09/04/2012 09:11:43  all.q@compute-0-0.local  16
[cferner@babbage mpi_assign]$

The state of "r" means running.

SGE Example

$ ls
hello  hello.c  Hello.o106  Hello.po106  hello.sge  ring  ring.c  ring.sge  test  test.c  test.sge
$ cat Hello.o106
Hello world from master process 0 running on compute-0-2.local
Message from process = 1 : Hello world from process 1 running on compute-0-2.local
Message from process = 2 : Hello world from process 2 running on compute-0-2.local
…

You will want to clean up the output files when you are done with them, or you will end up with a bunch of clutter.

Deleting a Job

$ qstat
job-ID  prior    name   user     state  submit/start at      queue  slots  ja-task-ID
--------------------------------------------------------------------------------------
   108  0.00000  Hello  cferner  qw     09/04/2012 09:18:20          16
$ qdel 108
cferner has registered the job 108 for deletion
$

Assignment 2 Setup (do this only once)

Put these lines in the file .bash_profile:

export MACHINE=x86_64-redhat-linux
export SUIFHOME=/share/apps/suifhome
export COMPILER_NAME=gcc
`perl $SUIFHOME/setup_suif -sh`

Then run the command:

$ . .bash_profile

Notice the two periods and the space between them.

Hello World Program

The program is given to you. You simply need to compile it and run it (using a job submission file). Try running it on multiple processors. Produce documentation of compiling and running the program.
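
The handout's code isn't reproduced here, but as a point of reference, a minimal MPI hello program consistent with the output shown in the SGE example above might look like the following sketch (all names are illustrative, not the handout's):

#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];
    char msg[512];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    if (rank == 0) {
        /* Master prints its own greeting, then one message per worker. */
        printf("Hello world from master process 0 running on %s\n", name);
        for (int i = 1; i < size; i++) {
            MPI_Recv(msg, sizeof msg, MPI_CHAR, i, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("Message from process = %d : %s\n", i, msg);
        }
    } else {
        /* Workers send their greeting to the master. */
        snprintf(msg, sizeof msg,
                 "Hello world from process %d running on %s", rank, name);
        MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}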

Matrix Multiplication

A matrix multiplication skeleton program is given to you in the Appendix. It includes:

Opening the input file
Reading the input
Taking a time stamp
Taking a 2nd time stamp
Computing the elapsed time between the time stamps
Printing the results
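
The time stamps are presumably taken with MPI_Wtime; the usual pattern is a short sketch like this:

double start, end, elapsed;

start = MPI_Wtime();     /* first time stamp, before the computation  */
/* ... scatter, compute, gather ... */
end = MPI_Wtime();       /* second time stamp, after the computation  */

elapsed = end - start;   /* elapsed wall-clock time in seconds        */
printf("elapsed time = %f seconds\n", elapsed);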

Matrix Multiplication

You need to:

Broadcast the error to the processors, and exit if necessary
Scatter the input
Compute the partial results
Gather the partial results
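
A sketch of what those four steps typically look like in MPI, assuming N x N matrices stored in flat arrays with N divisible by the number of processes (all names here are illustrative, not the skeleton's):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 512  /* matrix dimension; assume N is divisible by the process count */

/* a, b, c: N*N buffers allocated on every rank; a and b hold the input on
   rank 0, and c holds the result on rank 0 afterwards. error is the input
   error flag, meaningful on rank 0. */
void multiply(double *a, double *b, double *c, int error) {
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* If the master failed to read the input, tell everyone and quit. */
    MPI_Bcast(&error, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (error) { MPI_Finalize(); exit(1); }

    int rows = N / size;  /* band of A's rows handled by each process */
    double *partA = malloc(rows * N * sizeof(double));
    double *partC = malloc(rows * N * sizeof(double));

    /* Every process needs all of B, but only its own band of A. */
    MPI_Bcast(b, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    MPI_Scatter(a, rows * N, MPI_DOUBLE,
                partA, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Each process computes its band of C (the partial result). */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)
                sum += partA[i * N + k] * b[k * N + j];
            partC[i * N + j] = sum;
        }

    /* Collect the partial results back onto the master. */
    MPI_Gather(partC, rows * N, MPI_DOUBLE,
               c, rows * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    free(partA);
    free(partC);
}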

Heat Distribution

Using the stencil pattern, model the distribution of heat in a room that has a fireplace along one wall.

Heat Distribution

Each newly computed value is the average of its neighbors (diagonals also) as well as its own old value. So each value at location (i,j) should be the average of 9 values. Including the cell's own old value reduces oscillations.
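
One iteration of that 9-point update, as a sketch (grid names and dimensions are illustrative; the fixed boundary cells, including the fireplace, are assumed to be handled separately):

/* curr and next are ROWS x COLS grids; only interior cells are updated,
   so the fixed boundary (including the fireplace wall) is left alone. */
for (int i = 1; i < ROWS - 1; i++)
    for (int j = 1; j < COLS - 1; j++)
        next[i][j] = (curr[i-1][j-1] + curr[i-1][j] + curr[i-1][j+1] +
                      curr[i  ][j-1] + curr[i  ][j] + curr[i  ][j+1] +
                      curr[i+1][j-1] + curr[i+1][j] + curr[i+1][j+1]) / 9.0;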

Producing a Visual of the Output

[Two figures: the heat distribution rendered with X11 graphics, and the same output plotted in Excel]

Producing a Visual of the Output

See the document http://coitweb.uncc.edu/~abw/ITCS4145F13/Assignments/X11GraphicsNotes.pdf for help with creating graphics using X11. The Excel graph is a surface plot.
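
One way to get the data into Excel for the surface plot (not prescribed by the assignment, just a sketch with hypothetical names) is to write the final grid as comma-separated values and import that file:

#include <stdio.h>

/* Write a rows x cols grid as CSV so it can be imported into Excel and
   charted as a surface plot. Returns 0 on success, -1 on failure. */
int write_csv(const char *path, int rows, int cols, double grid[rows][cols]) {
    FILE *f = fopen(path, "w");
    if (f == NULL) return -1;
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++)
            fprintf(f, "%s%.4f", (j == 0) ? "" : ",", grid[i][j]);
        fprintf(f, "\n");
    }
    fclose(f);
    return 0;
}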

Monte Carlo Estimation of π (required for graduates, optional for undergraduates)

This follows the scatter/gather pattern, but uses broadcast and reduce. It is not a workflow pattern. π can also be estimated by numerical integration, but you aren't asked to do this.
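
A sketch of that broadcast/reduce structure, assuming the standard method of counting random points that fall inside the unit quarter-circle (point counts and seeding here are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    long n, local_hits = 0, hits;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) n = 10000000;  /* total number of random points */
    MPI_Bcast(&n, 1, MPI_LONG, 0, MPI_COMM_WORLD);  /* broadcast, not scatter */

    srand(rank + 1);  /* a different random stream per process */
    for (long i = 0; i < n / size; i++) {  /* each process does its share */
        double x = (double) rand() / RAND_MAX;
        double y = (double) rand() / RAND_MAX;
        if (x * x + y * y <= 1.0) local_hits++;
    }

    /* Sum the per-process counts onto the master: reduce, not gather. */
    MPI_Reduce(&local_hits, &hits, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %f\n",
               4.0 * hits / (double) ((n / size) * size));

    MPI_Finalize();
    return 0;
}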

Questions?