Software stack on the worker nodes
–AMD Opteron / Intel Westmere processors
–64-bit Scientific Linux (Red Hat based)
–Portland (PGI), GNU and Intel compilers
–OpenMPI
–Sun Grid Engine v6
–Ganglia monitoring

Keeping up-to-date with application packages From time to time the application packages are updated. When this happens:
–News articles announce the changes via iceberg news. To read the news, type news on iceberg or check the URL:
–The previous versions, or new test versions, of software are normally accessed via the version number. For example: abaqus69, nagexample23, matlab2011a

Running application packages in batch queues Iceberg has locally provided commands for running some of the popular applications in batch mode. These are: runfluent, runansys, runmatlab, runabaqus. To find out more, just type the name of the command on its own while on iceberg.

Setting up your software development environment Except for the scratch areas on the worker nodes, the view of the filestore is identical on every worker. You can set up your software environment for each job by means of the module commands. All the available software environments can be listed by using the module avail command. Having discovered what software is available on iceberg, you can then select the packages you wish to use with the module add or module load commands. You can load as many non-clashing modules as you need by issuing consecutive module add commands. You can use the module list command to check the list of currently loaded modules.
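A typical module session might look like the following sketch (the module name is taken from the MPI list later in these slides; run module avail to see what is actually installed):

  # List every software environment available on the cluster
  module avail
  # Load one of the MPI environments named later in these slides
  module load mpi/gcc/openmpi
  # Check which modules are currently loaded in this shell
  module list
  # Unload a module again if it clashes with something else you need
  module unload mpi/gcc/openmpi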

Software development environment
Compilers
–PGI compilers
–Intel compilers
–GNU compilers
Libraries
–NAG Fortran libraries (Mark 22, Mark 23)
–NAG C libraries (Mark 8)
Parallel programming related
–OpenMP
–MPI (OpenMPI, MPICH2, MVAPICH)

Managing Your Jobs: Sun Grid Engine overview SGE is the cluster's resource management, job scheduling and batch control system. (Others are available, such as PBS, Torque/Maui and Platform LSF.) It:
–Starts up interactive jobs on available workers
–Schedules all batch-oriented (i.e. non-interactive) jobs
–Attempts to create a fair-share environment
–Optimizes resource utilization

Job scheduling on the cluster [Diagram: jobs submitted by users are passed to the SGE MASTER node, which dispatches them into slots in Queue-A, Queue-B and Queue-C on the SGE worker nodes according to queues, policies, priorities, share/tickets, resources and users/projects.]

Job scheduling on the cluster [Diagram, continued: a single JOB X being placed by the SGE MASTER node into a free slot on one of the worker-node queues.]

Submitting your job There are two SGE commands for submitting jobs:
–qsh or qrsh : to start an interactive job
–qsub : to submit a batch job
There is also a set of locally produced commands for submitting some of the popular applications to the batch system; they all make use of the qsub command. These are: runfluent, runansys, runmatlab, runabaqus

Managing Jobs: monitoring and controlling your jobs There are a number of commands for querying and modifying the status of a job that is running or waiting to run. These are:
–qstat or Qstat (query job status)
–qdel (delete a job)
–qmon (a GUI interface for SGE)

Running Jobs Example: submitting a serial batch job Use an editor to create a job script in a file (e.g. example.sh):

  #!/bin/bash
  # Scalar benchmark
  echo 'This code is running on' `hostname`
  date
  ./linpack

Submit the job: qsub example.sh

Running Jobs: qsub and qsh options

-l h_rt=hh:mm:ss  The wall clock time. This parameter must be specified; failure to include it will result in the error message "Error: no suitable queues". Current default is 8 hours.

-l arch=intel* or -l arch=amd*  Force SGE to select either Intel or AMD architecture nodes. There is no need to use this parameter unless the code has a processor dependency.

-l mem=memory  Sets the virtual-memory limit, e.g. -l mem=10G (for parallel jobs this is per processor and not total). Current default if not specified is 6 GB.

-l rmem=memory  Sets the limit of real memory required. Current default is 2 GB. Note: the rmem parameter must always be less than mem.

-help  Prints a list of options.

-pe ompigige np / -pe openmpi-ib np / -pe openmp np  Specifies the parallel environment to be used; np is the number of processors required for the parallel job.
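Several of these options are usually combined on one command line; a sketch, with purely illustrative resource values:

  # Ask for 4 hours of run time on an Intel node, with 8 GB virtual
  # and 3 GB real memory (all values are illustrative)
  qsub -l h_rt=04:00:00 -l arch=intel* -l mem=8G -l rmem=3G example.sh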

Running Jobs: qsub and qsh options (continued)

-N jobname  By default a job's name is constructed from the job script file name and the job-id that SGE allocates to the job. This option defines the job name. Make sure it is unique, because the job output files are constructed from the job name.

-o output_file  Output is directed to a named file. Make sure not to overwrite important files by accident.

-j y  Join the standard output and standard error output streams (recommended).

-m [bea] -M email-address  Sends emails about the progress of the job to the specified email address. If used, both -m and -M must be specified. Select any or all of b, e and a to request emailing when the job begins, ends or aborts.

-P project_name  Runs a job using the specified project's allocation of resources.

-S shell  Use the specified shell to interpret the script rather than the default bash shell. Use with care; a better option is to specify the shell in the first line of the job script, e.g. #!/bin/bash

-V  Export all environment variables currently in effect to the job.
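For instance, to name a job, merge its output streams and be emailed when it begins and ends (the address below is only a placeholder):

  # Name the job, merge stdout/stderr, and email at begin and end
  qsub -N mytest -j y -m be -M your.name@sheffield.ac.uk example.sh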

Running Jobs: batch job example qsub example:

  qsub -l h_rt=10:00:00 -o myoutputfile -j y myjob

OR alternatively... the first few lines of the submit script myjob contain:

  #!/bin/bash
  #$ -l h_rt=10:00:00
  #$ -o myoutputfile
  #$ -j y

and you simply type: qsub myjob

Running Jobs: interactive jobs (qsh, qrsh) These two commands find a free worker node and start an interactive session for you on that worker node. This ensures good response, as the worker node will be dedicated to your job. The only difference between qsh and qrsh is that qsh starts a session in a new command window whereas qrsh uses the existing window. Therefore, if your terminal connection does not support graphics (i.e. X Windows), then qrsh will continue to work whereas qsh will fail to start.
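A minimal sketch, assuming the interactive commands accept the same resource syntax as qsub (the time value is illustrative):

  # Start an interactive session on a free worker node in the current
  # terminal, asking for 2 hours of wall clock time
  qrsh -l h_rt=02:00:00
  # Or, if X Windows is available, open the session in a new window
  qsh -l h_rt=02:00:00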

Running Jobs: a note on interactive jobs Software that requires intensive computing should be run on the worker nodes and not the head node. You should run compute-intensive interactive jobs on the worker nodes by using the qsh or qrsh command. The maximum (and also default) time limit for interactive jobs is 8 hours.

Managing Jobs: monitoring your jobs with qstat or Qstat Most of the time you will be interested only in the progress of your own jobs through the system. The Qstat command gives a list of all your jobs, interactive and batch, that are known to the job scheduler. As you are in direct control of your interactive jobs, this command is useful mainly for finding out about your batch jobs (i.e. qsub'ed jobs). The qstat command (note the lower-case q) gives a list of all the executing and waiting jobs by everyone. Having obtained the job-id of your job by using Qstat, you can get further information about a particular job by typing:
  qstat -f -j job_id
You may find that the information produced by this command is far more than you care for; in that case the following command can be used to find out about memory usage, for example:
  qstat -f -j job_id | grep usage

Managing Jobs: qstat example output State can be:
–r = running
–qw = waiting in the queue
–E = error state
–t = transferring, just before starting to run
–h = hold, waiting for other jobs to finish
[Example output: a table with columns job-ID, prior, name, user, state, submit/start at, queue, slots and ja-task-ID, listing interactive and batch jobs in state r alongside queued jobs in state qw.]

Managing Jobs: deleting/cancelling jobs The qdel command can be used to cancel running jobs or to remove waiting jobs from the queue.
–To cancel an individual job: qdel job_id
–To cancel a list of jobs: qdel job_id1, job_id2, and so on
–To cancel all jobs belonging to a given username: qdel -u username
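For example (the job number is the illustrative one used later in these slides):

  # Cancel one specific job
  qdel 123456
  # Cancel everything you currently have queued or running
  qdel -u $USER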

Managing Jobs: job output files When a job is queued it is allocated a job-id (an integer). Once the job starts to run, normal output is sent to the output (.o) file and the error output is sent to the error (.e) file.
▬ The default output file name is: script_name.o<job_id>
▬ The default error output file name is: script_name.e<job_id>
If the -N parameter to qsub is specified, the respective output files become jobname.o<job_id> and jobname.e<job_id>.
The -o or -e parameters can be used to define the output files to use.
The -j y parameter forces the job to send both error and normal output into the same (output) file (RECOMMENDED).

Monitoring the job output files The following is an example of submitting an SGE job and checking the output it produces:
  qsub myjob.sh          (job 123456 is submitted)
  qstat -f -j 123456     (is the job running?)
When the job starts to run, type:
  tail -f myjob.sh.o123456

Problem Session Problem 10 –Only up to test 5

Managing Jobs: reasons for job failures
–SGE cannot find the binary file specified in the job script.
–You ran out of file storage. It is possible to exceed your filestore allocation limits during a job that is producing large output files. Use the quota command to check this.
–Required input files are missing from the startup directory.
–An environment variable is not set correctly (LM_LICENSE_FILE etc.).
–Hardware failure (e.g. MPI ch_p4 or ch_gm errors).

Finding out the memory requirements of a job
Virtual memory limits:
▬ The default virtual memory limit for each job is 6 GBytes.
▬ Jobs will be killed if the virtual memory used by the job exceeds the amount requested via the -l mem= parameter.
Real memory limits:
▬ The default real memory allocation is 2 GBytes.
▬ Real memory can be requested by using the -l rmem= parameter.
▬ Jobs exceeding the real memory allocation will not be deleted but will run with reduced efficiency, and the user will be emailed about the memory deficiency.
▬ When you get warnings of that kind, increase the real memory allocation for your job by using the -l rmem= parameter.
▬ rmem must always be less than mem.
Determining the virtual memory requirements of a job:
▬ qstat -f -j jobid | grep mem
▬ The reported figures indicate the currently used memory (vmem), the maximum memory needed since startup (maxvmem) and the cumulative memory_usage*seconds (mem).
▬ When you next run the job, use the reported value of vmem to specify the memory requirement.
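A sketch of that cycle, using the illustrative job number from the earlier slides and made-up memory figures:

  # See how much virtual memory job 123456 has been using
  qstat -f -j 123456 | grep mem
  # If maxvmem is close to the limit, resubmit with larger requests,
  # keeping rmem below mem (values are only an illustration)
  qsub -l mem=12G -l rmem=4G myjob.sh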

Managing Jobs: running arrays of jobs Add the -t parameter to the qsub command or to the script file (with #$ at the beginning of the line).
–Example: -t 1-10
This will create 10 tasks from one job. Each task will have its environment variable $SGE_TASK_ID set to a single unique value ranging from 1 to 10. There is no guarantee that task number m will start before task number n, where m < n.
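A minimal sketch of an array job script; myprogram and the input/output file names are placeholders for your own:

  #!/bin/bash
  #$ -l h_rt=01:00:00
  #$ -t 1-10
  # Each of the 10 tasks picks its own input file via $SGE_TASK_ID
  echo "Task $SGE_TASK_ID running on `hostname`"
  ./myprogram input_${SGE_TASK_ID}.dat > output_${SGE_TASK_ID}.txt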

Managing Jobs: running cpu-parallel jobs The parallel environment needed for a job can be specified by the -pe <env> nn parameter of the qsub command, where <env> is one of:
▬ openmp : shared-memory OpenMP jobs, which must therefore run on a single node using its multiple processors.
▬ ompigige : OpenMPI library over Gigabit Ethernet. These are MPI jobs running on multiple hosts using the Ethernet fabric (1 Gbit/sec).
▬ openmpi-ib : OpenMPI library over InfiniBand. These are MPI jobs running on multiple hosts using the InfiniBand connection (32 Gbit/sec).
▬ mvapich2-ib : MVAPICH2 library over InfiniBand. As above, but using the MVAPICH MPI library.
Compilers that support MPI:
▬ PGI
▬ Intel
▬ GNU

Setting up the parallel environment in the job script Having selected the parallel environment to use via the qsub -pe parameter, the job script can define a corresponding environment/compiler combination to be used for MPI tasks. The module commands help set up the correct compiler and MPI transport combination. The currently available MPI modules are:
  mpi/pgi/openmpi     mpi/pgi/mvapich2
  mpi/intel/openmpi   mpi/intel/mvapich2
  mpi/gcc/openmpi     mpi/gcc/mvapich2
For GPU programming with CUDA: libs/cuda
Example: module load mpi/pgi/openmpi

Summary of module load parameters for parallel MPI environments

  Compiler | -pe openmpi-ib or -pe ompigige | -pe mvapich2-ib
  PGI      | mpi/pgi/openmpi                | mpi/pgi/mvapich2
  Intel    | mpi/intel/openmpi              | mpi/intel/mvapich2
  GNU      | mpi/gcc/openmpi                | mpi/gcc/mvapich2
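Putting this together, a sketch of an MPI batch script for the openmpi-ib environment; ./mympiprog, the slot count and the run time are placeholders, and the mpirun launch line assumes a standard OpenMPI setup:

  #!/bin/bash
  #$ -l h_rt=08:00:00
  #$ -pe openmpi-ib 8
  #$ -j y
  # Load the compiler/MPI combination matching the table above
  module load mpi/gcc/openmpi
  # $NSLOTS is set by SGE to the number of slots granted by -pe
  mpirun -np $NSLOTS ./mympiprog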

Running GPU parallel jobs GPU parallel processing is supported on 8 Nvidia Tesla Fermi M2070 GPU units attached to iceberg. In order to use the GPU hardware you will need to join the GPU project by emailing a request. You can then submit jobs that use the GPU facilities by using the following three parameters to the qsub command:
  -P gpu
  -l arch=intel*
  -l gpu=nn
where 1 <= nn <= 8 is the number of GPU modules to be used by the job. -P stands for the project that you belong to (see next slide).
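For example, submitting a job that needs a single GPU might look like this (mygpujob.sh is a placeholder for your own script):

  # Request one GPU on an Intel node under the gpu project
  qsub -P gpu -l arch=intel* -l gpu=1 mygpujob.sh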

Special projects and resource management The bulk of the iceberg cluster is shared equally amongst the users, i.e. each user has the same privileges for running jobs as any other user. However, there are extra nodes connected to the iceberg cluster that are owned by individual research groups. It is possible for new research project groups to purchase their own compute nodes/clusters and make them part of the iceberg cluster. We define such groups as special project groups in SGE parlance and give priority to their jobs on the machines that they have purchased. This scheme allows such research groups to use iceberg as ordinary users (with an equal share to other users) or to use their privileges (via the -P parameter) to run jobs on their own part of the cluster without having to compete with other users. This way everybody benefits, as the machines currently unused by a project group can be utilised to run normal users' short jobs.

Job queues on iceberg

  Queue name | Time limit (hours) | System specification
  short.q    | 8                  |
  long.q     | 168                | Long-running serial jobs
  parallel.q | 168                | Jobs requiring multiple nodes
  openmp.q   | 168                | Shared-memory jobs using OpenMP
  gpu.q      | 168                | Jobs using the GPU units

Getting help
–Web site:
–Documentation:
–Training (also uses the learning management system):
–Uspace:
–Contacts: