Compiling and Job Submission

Presentation transcript:

Compiling and Job Submission Turning your source code into an executable, then running it in batch mode. April 23, 2002

C compiler
  -g        for debugging
  -X        to hardcode the number of PEs
  -l        to link with a library
  -O[0-3]   for optimization
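For example, a minimal sketch of a compile line combining these options (the source file name and the -lm math-library link are illustrative, and the exact spelling of the -X argument may differ on your system):

  cc -g -O2 -X4 myprog.c -o myprog -lm    # debug info, level-2 optimization, hardcode 4 PEs, link libm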

Fortran compiler
  -g        for debugging
  -X        to hardcode the number of PEs
  -l        to link with a library
  -O[0-3]   for optimization
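The Fortran equivalent of the sketch above, again with an illustrative file name:

  f90 -g -O2 -X4 myprog.f -o myprog       # same options, Fortran front end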

MPI library Link with -lmpi. This is done automatically for you on jaromir, but you must remember to link with it explicitly if you are using MPI on most other systems. April 23, 2002
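On a system where MPI is not linked automatically, the link flag appears explicitly on the compile line; a hedged sketch with a hypothetical file name:

  f90 myprog.f -o myprog -lmpi            # explicit MPI link, needed on most systems other than jaromir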

Running your program To run your program in parallel, issue the mpprun command. Indicate the number of processors with -nX. Example: mpprun -n4 a.out
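Putting compile and run together, a minimal sketch (program name hypothetical):

  f90 myprog.f -o myprog                  # compile
  mpprun -n4 myprog                       # launch on 4 processors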

Interactive Mode
- Used for compiling and debugging
- Should not be used for production runs
- Do not run multiple interactive jobs at the same time
- Limit of 10 CPU minutes in interactive mode

Batch Mode
- Create a script
- Submit it to the queueing system
- Available 24 hours a day
- Should be used for production runs

Sample batch file
#QSUB -l mpp_t=3600             # request 3600 seconds of execution time
#QSUB -l mpp_p=2                # request 2 PEs
#QSUB -o t3e.output -eo         # name the output file and merge stderr into it
set echo                        # echo each command as it runs
ja                              # start job accounting
cd $TMP                         # move to the job's scratch directory
cp ~/prog prog                  # copy the executable from your home directory
cp ~/data data                  # copy the input data
mpprun -n2 prog > results       # run on 2 PEs, redirecting output to results
far store results results       # archive the results file to far
ja -csthlMe                     # print the job accounting report

Submit the job While logged into jaromir, use the qsub command:
qsub jobfile

Monitor the job The qstat command displays the status of the job. qstat -a will show information about your jobs.

qstat output A sample line:
88059.jaromir.psc.edu  test.job  username  qm_12h_128@jaromir  34455  24  690  7113  R05

Delete a job The qdel command will delete a job; use -k if the job is already running.
qdel jobid
qdel -k jobid

Output and Error files Upon completion of your batch job, you should receive an output file and an error file (unless you combined them with the -eo option).

Typical Errors
"The current csh(23395) has received signal 26 (cpu limit exceeded)": ask for more time in your batch job.
"Warning: no access to tty; thus no job control in this shell": this simply indicates that the job is a batch request; ignore this message.

Exercises Log in to jaromir and cd to your staging directory (you may need to create it first):
mkdir /tmp/username
cd /tmp/username

Exercises (cont.) Copy exer.f from /tmp/training to your staging directory:
cp /tmp/training/exer.f .
Compile:
f90 exer.f -o exer
Run interactively, entering the 3 integers when prompted:
mpprun -n4 exer

Fortran Sample Code exer.f: compile and link with the MPI library. Run on 2-8 processors. Enter 3 integers: the first is the size of the problem, the second the number of iterations, and the third the number of processors used. Outputs the time and flops.

Exercises (cont.) Copy shuf.c from /tmp/training to your staging directory:
cp /tmp/training/shuf.c .
Compile:
cc shuf.c -o shuf
Run interactively on 4 processors:
mpprun -n4 shuf

C Sample Code shuf.c: compile and link with the MPI library. Run on 2-8 processors. Passes numbers via MPI.

Exercises - Job Submission Create a job that will (a sample script sketch follows below):
- Request 50 seconds of execution time and 2 PEs
- Change directory to $TMP
- Copy the shuf executable from your /tmp/username directory to $TMP
- Run shuf
- Redirect the output to a file called output.shuf
- Copy output.shuf to /tmp/username
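One possible solution, modeled on the sample batch file earlier; this is only a sketch, and the output file name and paths are assumptions:

#QSUB -l mpp_t=50               # 50 seconds of execution time
#QSUB -l mpp_p=2                # 2 PEs
#QSUB -o shuf.output -eo        # write output to shuf.output, merging stderr into it
set echo
cd $TMP                         # change directory to $TMP
cp /tmp/username/shuf shuf      # copy the shuf executable to $TMP
mpprun -n2 shuf > output.shuf   # run shuf, redirecting the output
cp output.shuf /tmp/username    # copy output.shuf back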

Exercises - Job Submission 2 (the corresponding commands are sketched below):
- Submit the job
- Check its status
- Check the error and output files
- Store output.shuf to far
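A hedged sketch of those commands, assuming the script above was saved as shuf.job and that far follows the "far store local remote" pattern used in the sample batch file:

qsub shuf.job                        # submit the job
qstat -a                             # check its status
cat shuf.output                      # check the output (stderr was merged in by -eo)
far store output.shuf output.shuf    # store the result to far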