Getting started on the Cray XE6 Beagle
Beagle Team, Computation Institute, University of Chicago & Argonne National Laboratory

Intro to Beagle – Outline
What is the Computation Institute?
Beagle hardware
Basics about the work environment
Data transfer using Globus Online
Use of the compilers (C, C++, and Fortran)
Launch of a parallel application
Job monitoring
Introduction to debugger and profiler
Introduction to parallel scripting with Swift

Computation Institute Director: Ian Foster Contact:

Intro to Beagle – Computation Institute
Joint Argonne/Chicago institute, with ~100 Fellows (~50 UChicago faculty) and ~60 staff
Primary goals:
– Pursue new discoveries using multi-disciplinary collaborations and computational methods
– Develop new computational methods and paradigms required to tackle these problems, and create the computational tools required for the effective application of advanced methods at the largest scales
– Educate the next generation of investigators in the advanced methods and platforms required for discovery

Intro to Beagle – How the CI supports people who use Beagle
Programs: Catalyst Program, Help Desk Support, Training & Outreach, Performance Engineering
Services include: startup assistance, user administration assistance, job management services, technical support, Beagle services, user campaign management, assistance with planning and reporting, collaboration within science domains, a Beagle point of coordination, performance engineering, application tuning, data analytics, I/O tuning, workshops & seminars, customized training programs, and on-line content & user guides (Beagle’s wiki and Beagle’s web page)

Intro to Beagle – Beagle: hardware overview

Intro to Beagle – Beagle “under the hood”

Intro to Beagle – Beagle Cray XE6 system overview
Compute nodes (736): not directly accessible; where computations are performed
Login nodes (2): accessible; where jobs are submitted
Sandbox node (1): accessible; compilation, script design…
Service nodes: network access, scheduler, I/O, …
To know more:

Intro to Beagle – Compute nodes
2 AMD Opteron 6100 “Magny-Cours” 12-core processors (24 cores per node), 2.1 GHz
32 GB RAM (8 GB per processor)
No disk on node (mounts DVS and Lustre network filesystems)

Intro to Beagle – Details about the processors (sockets)
Superscalar: 3 integer ALUs, 3 floating-point ALUs (can do 4 FP operations per cycle)
Cache hierarchy (victim cache):
– 64 KB L1 instruction cache
– 64 KB L1 data cache (latency 3 cycles)
– 512 KB L2 cache per processor core (latency 9 cycles)
– 12 MB shared L3 cache (latency 45 cycles)
To know more:

Intro to Beagle – Interconnect
Communication between compute nodes and with the service nodes
Gemini interconnect: 2 nodes per Gemini ASIC, 4 x 12 cores (48 cores per Gemini)
Geminis are arranged in a 3D torus
Latency ~1 μs
168 GB/s of switching capacity (20 GB/s injection bandwidth per node)
Resilient design
To know more:

Intro to Beagle – Steps for computing on Beagle
You need a user id on Beagle
You need an active project
You need to understand the basics of how the system works (check files, move files, create directories)
You need to move your data to Beagle
The application(s) that perform the calculations need to be installed on Beagle
You need to submit and monitor your jobs to the compute nodes
You need to transfer your data back to your system

Intro to Beagle – What you need to get started on Beagle
A CI account: if you don’t have one, get one
– You will need someone at the CI to sponsor you; this person can be:
o Your PI, if he or she is part of the CI
o A collaborator who is part of the CI
o A catalyst you will be working with
A CI project (for accounting)
– For joining an existing HPC project or for creating a new HPC project
– This will change later this year, to let an allocations committee make the decisions
To know more about CI accounts and HPC basics –
To know more about Beagle accounts and basics –

Intro to Beagle – Basics on using Beagle
Login (see the examples below)
– ssh to login.beagle.ci.uchicago.edu to submit jobs
– ssh to sandbox.beagle.ci.uchicago.edu for CPU-intensive development and interactive operations
– To know more:
Data transfer
– For small files, scp or sftp
– GridFTP to gridftp.beagle.ci.uchicago.edu
– Or use Globus Online (coming later in the talk)
– To know more:
How to receive personalized support –
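For example, a minimal login sketch (the user name jdoe is hypothetical; use your own CI account name):
ssh jdoe@login.beagle.ci.uchicago.edu     # log in to a login node to submit and monitor jobs
ssh jdoe@sandbox.beagle.ci.uchicago.edu   # log in to the sandbox node for compilation and development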

Intro to Beagle – Beagle’s operating system
Cray XE6 uses the Cray Linux Environment v3 (CLE3), which is SuSE Linux-based
Compute nodes use Compute Node Linux (CNL); login and sandbox nodes use a more standard Linux. The two are different.
Compute nodes can operate in
– ESM (extreme scalability mode), to optimize performance for large multi-node calculations
– CCM (cluster compatibility mode), for out-of-the-box compatibility with Linux/x86 versions of software – without recompilation or relinking!
To know more:

Intro to Beagle – Modules and work environment
Modules set the environment necessary to use a specific application, collection of applications, or library
A module dynamically modifies the user environment
The module command provides a number of capabilities, including the following (see the sketch after this list):
– loading a module (module load)
– unloading a module (module unload)
– unloading a module and loading another (module swap)
– listing which modules are loaded (module list)
– determining which modules are available (module avail)
To know more:
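A minimal sketch of a typical module session (the fftw module name is taken from a later slide; exact module names and versions on the system may differ):
module avail                        # see everything that is available
module load fftw                    # load the default fftw module
module list                         # verify what is currently loaded
module swap PrgEnv-pgi PrgEnv-gnu   # switch programming environments
module unload fftw                  # remove fftw from the environment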

Intro to Beagle – Beagle’s filesystems
/lustre/beagle: local Lustre filesystem (read-write – this is where batch jobs should do most of their I/O; NO BACKUP!)
/gpfs/pads: PADS GPFS (read-write) – for permanent storage
/home: CI home directories (read-only on compute nodes)
USE LUSTRE ONLY for I/O on compute nodes:
– It is considerably faster than the other filesystems
– Use of the other filesystems can seriously affect performance, as they rely on network and I/O external to Beagle
/soft, /tmp, /var, /opt, /dev, …: you usually won’t need to worry about those
To know more:

Intro to Beagle – The Lustre filesystem
I/O during computation should be done through the high-performance Lustre filesystem
Lustre is mounted as /lustre/beagle
Users have to create their own directory on Lustre; this gives them more freedom in how to set it up (naming, privacy, …) – see the sketch below
To know more:
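A minimal sketch of creating a personal directory on Lustre (the project subdirectory name is hypothetical; the whoami-based layout follows the batch-script examples later in this deck):
mkdir -p /lustre/beagle/$(whoami)/myproject   # create your own area under /lustre/beagle
chmod 700 /lustre/beagle/$(whoami)            # optional: make it private to you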

Intro to Beagle – Lustre performance: striping
Files in the Lustre filesystem are striped by default: split up into pieces and sent to different disks
This parallelization of the I/O lets a user exploit more disks at the same time and may give higher I/O bandwidth if used properly
Usually good values are between one and four; higher values might be better for specific applications, but this is not likely
To know more:

Intro to Beagle – Lustre basic commands
lfs df – system configuration information
lfs setstripe – create a file or directory with a specific striping pattern
lfs getstripe – display file striping patterns
lfs find [directory | file name] – find a file or directory
Try typing: man lfs
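For illustration, a hedged example of setting and checking striping on a directory (the directory name is hypothetical; -c sets the stripe count, here 4, in line with the one-to-four guidance above):
lfs setstripe -c 4 /lustre/beagle/$(whoami)/myproject   # new files created here will be striped over 4 targets
lfs getstripe /lustre/beagle/$(whoami)/myproject        # confirm the striping pattern
lfs df -h                                               # overall Lustre usage and configuration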

Intro to Beagle – How to move data to and from Beagle
Beagle is not HIPAA-compliant – do not put PHI data on Beagle
Factors to consider when choosing a data movement tool:
– how many files there are and how large they are
– how much fault tolerance is desired
– performance
– security requirements
– the overhead needed for software setup
Recommended tools (see the example below):
– scp/sftp can be OK for moving a few small files
o pros: quick to initiate
o cons: slow and not scalable
– For optimal speed and reliability we recommend Globus Online:
o high-performance (i.e., fast)
o reliable and easy to use, from either a command line or a web browser
o provides fault-tolerant, fire-and-forget transfers
If you know you’ll be moving a lot of data, or find scp too slow or unreliable, we recommend Globus Online
To know more:
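As a quick illustration of the small-file case, a hedged scp example (the user name jdoe and file names are hypothetical; larger transfers should go through Globus Online as described in the next slides):
scp results.tar.gz jdoe@login.beagle.ci.uchicago.edu:/lustre/beagle/jdoe/   # push a file to your Lustre area
scp jdoe@login.beagle.ci.uchicago.edu:/lustre/beagle/jdoe/out.log .         # pull a file back to your machine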

Intro to Beagle – Getting data to the right place… trivial, right?
“I need my data over there – at my _____” (supercomputing center, campus server, etc.)
[Diagram: data source to data destination]

Intro to Beagle – Reality: it is tedious and time-consuming
“GAAAH! What’s the big deal?”
[Diagram: data source to data destination]

How It Works
1. User initiates transfer request
2. Globus Online moves files
3. Globus Online notifies user

Getting Started (2 easy steps)
1. Sign up: visit www.globusonline.org to create an account

Getting Started (2 easy steps, continued)
2. Start moving files: pick your data and where you want to move it, then click to transfer

File Movement Options
We strive to make Globus Online broadly accessible…
– You can just move files using the Web GUI
– To automate workflows, you can use the Command Line Interface (CLI)
To know more: (quickstart, tutorials, FAQs, …)

Intro to Beagle – Steps for computing on Beagle (recap)
✔ You need a user id on Beagle
✔ You need an active project
✔ You need to understand the basics of how the system works (check files, move files, create directories)
✔ You need to move your data to Beagle
The application(s) that perform the calculations need to be installed on Beagle
You need to submit and monitor your jobs to the compute nodes
✔ You need to transfer your data back to your system

Intro to Beagle – Applications on Beagle
Applications on Beagle are run from the command line, e.g.: aprun -n <N> myMPIapp &> this.log
How do I know if an application is on Beagle?
– Use module avail, e.g.: module avail 2>&1 | grep -i namd
– Sample output: gromacs/4.5.3(default) namd/2.7(default)
What if it isn’t there? What if I want to use my own application?
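A minimal sketch of the usual search-and-load sequence (the namd module name comes from the example above; version strings on the current system may differ):
module avail 2>&1 | grep -i namd   # list any modules whose names match “namd”
module load namd                   # load the default version found above
module list                        # confirm it now appears among the loaded modules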

Intro to Beagle – If you need a tool that isn’t on Beagle
For any specific requirements, submit a support ticket with the following information:
– Research project, group and/or PI
– Name(s) of software package(s)
– Intended use and/or purpose
– Licensing requirements (if applicable)
– Specific instructions or preferences (specific release/version/vendor, associated packages, URLs for download, etc.)

Intro to Beagle – Porting software to Beagle: modules
module list
Currently Loaded Modulefiles include: modules, nodestat, xtpe-network-gemini, pgi, xt-libsci, pmi, xt-mpich2, xt-asyncpe/4.8, atp, PrgEnv-pgi, xtpe-mc12, torque, moab/5.4.1
PrgEnv-xxxx refers to the programming environment currently loaded; the default is PGI (Portland Group compilers)

Intro to Beagle – Compilation environment
module avail PrgEnv    (or: module avail 2>&1 | grep PrgEnv)
PrgEnv-cray/1.0.2  PrgEnv-cray/3.1.49A  PrgEnv-cray/3.1.61(default)
PrgEnv-gnu/3.1.49A  PrgEnv-gnu/3.1.61(default)
PrgEnv-pgi/3.1.49A  PrgEnv-pgi/3.1.61(default)
Cray compilers: excellent Fortran; CAF and UPC
GNU compilers: excellent C; the standard
PGI compilers: excellent Fortran; reliable
We will soon also have the Pathscale compilers

Intro to Beagle – Compiling on Beagle
Compilers are called
– cc for a C compiler
– CC for a C++ compiler
– ftn for a Fortran compiler
Do not use gcc, gfortran, …: those commands will produce an executable for the sandbox node!
cc, CC, ftn, etc. are cross-compilers (driver scripts) and produce code to be run on the compute nodes
To know more:
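For illustration, a few hedged invocations (the source-file names are hypothetical); the same driver commands work whichever PrgEnv module is loaded:
cc  -o hello_c   hello.c     # C, compiled for the compute nodes
CC  -o hello_cxx hello.cpp   # C++
ftn -o hello_f   hello.f90   # Fortran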

Intro to Beagle – Compiling on Beagle: environment set up
Move your source files to Beagle
Select a compiler and load it, e.g., module swap PrgEnv-pgi PrgEnv-gnu
Determine whether additional libraries are required and whether native, optimized versions are available for the Cray XE6 under “Math and Science Libraries”
– For a list of all libraries installed on Beagle use: module avail 2>&1 | less
Load the required libraries, e.g., FFTW, via module load fftw (see the sketch below)
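Putting the steps together, a minimal sketch (the source file fft_demo.c is hypothetical; with the fftw module loaded, the compiler wrappers normally add the FFTW include and link paths for you):
module swap PrgEnv-pgi PrgEnv-gnu   # switch to the GNU programming environment
module load fftw                    # make FFTW available to the compiler wrappers
cc -o fft_demo fft_demo.c           # build with the cross-compiling driver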

Intro to Beagle – User’s guides and man pages
PGI: type man pgf90, man pgcc, man pgCC
GCC: type man gfortran, man gcc, man g++
Cray (under “Programming Environment”): type man crayftn, man craycc, man crayc++
Pathscale: type man pathf90, man pathcc, man pathCC, man eko

Intro to Beagle – More details about the environment
Beagle can use both statically and dynamically linked (shared) libraries
All compilers on Beagle support:
– MPI (Message Passing Interface, the standard for distributed computing)
– OpenMP (the standard for shared-memory computing)
Note: the flags activating OpenMP pragmas or directives may differ among compilers; see the man pages (an illustrative list follows below)
Some compilers also support PGAS languages (e.g., CAF or UPC), for example the Cray compilers
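As an illustration of how the OpenMP flag varies (check each compiler’s man page for the authoritative option; the source-file name is hypothetical):
ftn -mp       omp_code.f90   # PGI: -mp
ftn -fopenmp  omp_code.f90   # GNU: -fopenmp
ftn           omp_code.f90   # Cray: OpenMP is typically enabled by default (disable with -h noomp)
ftn -mp       omp_code.f90   # Pathscale: -mp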

Intro to Beagle – Steps for computing on Beagle (recap)
✔ You need a user id on Beagle
✔ You need an active project
✔ You need to understand the basics of how the system works (check files, move files, create directories)
✔ You need to move your data to Beagle
✔ The application(s) that perform the calculations need to be installed on Beagle
You need to submit and monitor your jobs to the compute nodes
✔ You need to transfer your data back to your system

Intro to Beagle – Running jobs on the compute nodes
The system operates through a resource manager (Torque) and a scheduler (Moab)
Beagle’s CLE (Cray Linux Environment) supports both interactive and batch computations
When running applications on the compute nodes, it is best to work from the login nodes (as opposed to the sandbox node, which is better used for development)
It is not possible to log in on the compute nodes

Intro to Beagle – Launching an application on the compute nodes
The steps are usually all part of a PBS (Portable Batch System) script:
– The first step is to obtain resources, which uses the qsub command
– The second step is to set the appropriate environment to run the calculations
– The third step is to move input files, personal libraries and applications to the Lustre file system
– The fourth step is to run the application on the compute nodes using the application launcher (aprun)
– The final step is to move files back to /home or /gpfs/pads/projects

Intro to Beagle – First step: request resources with qsub
Users cannot access compute nodes without a resource request managed by Torque/Moab; that is, you will always need to use qsub
Typical calls to qsub are:
– For an interactive job: qsub -I -l walltime=00:10:00,mppwidth=24
– For a batch job: qsub my_script.pbs

Intro to Beagle – Interactive jobs
When you run interactive jobs you will see a qsub prologue:
qsub -I -l walltime=00:10:00,mppwidth=24
qsub: waiting for job sdb to start
qsub: job sdb ready
############################# Beagle Job Start ##################
# Job ID:                Project: CI-CCR                         #
# Start time: Tue Jul 26 12:23:14 CDT 2011                       #
# Resources: walltime=00:10:00                                   #
##################################################################
After you receive a prompt, you can run your jobs via aprun:
aprun -n 24 myjob.exe &> my_log
aprun -n 24 myjob2.exe &> my_log2
Good for debugging and small tests; limited to one node (24 cores)

Intro to Beagle – Batch scripts
Batch scheduling is usually done with a PBS script
Scripts can be very complex (see the following talk about Swift)
Note: the script itself is executed on the login node! Only what follows the aprun command is run on the compute nodes
We’ll look at simple scripts
To know more:

Intro to Beagle – Example of an MPI script

#!/bin/bash
#PBS -N MyMPITest           # give a name to the job
#PBS -l walltime=1:00:00    # set wall time to 1 hr (hh:mm:ss)
#PBS -l mppwidth=240        # request 240 PEs
#PBS -j oe                  # ask the scheduler to merge stderr and stdout

# Move to the directory where the script was submitted -- by the qsub command
cd $PBS_O_WORKDIR

# Define and create a directory on /lustre/beagle where to run the job
LUSTREDIR=/lustre/beagle/`whoami`/MyMPITest/${PBS_JOBID}
echo $LUSTREDIR
mkdir -p $LUSTREDIR

# Copy the input file and executable to /lustre/beagle
cp /home/lpesce/tests/openMPTest/src/hello_smp hello.in $LUSTREDIR

# Move to /lustre/beagle
cd $LUSTREDIR

# Run hello_smp on 240 cores, i.e., using 240 PEs (-n 240),
# each with 1 thread -- i.e., just itself (the default when -d is not given)
aprun -n 240 ./hello_smp > hello.out3

Notes from the slide: I use bash as the shell; $PBS_O_WORKDIR is the directory from which the script was submitted; all files that will be used are staged to Lustre before aprun sends the computation to the compute nodes (-n 240 asks for 240 MPI processes).

Intro to Beagle – Example of an OpenMP script

#!/bin/bash
#PBS -N MyOMPTest           # give a name to the job
#PBS -l walltime=48:00:00   # set wall time to the maximum: 48 hrs (hh:mm:ss)
#PBS -l mppwidth=24         # request one node (24 cores)
#PBS -j oe                  # ask the scheduler to merge stderr and stdout

# Move to the directory where the script was submitted -- by the qsub command
cd $PBS_O_WORKDIR

# Define and create a directory on /lustre/beagle where to run the job
LUSTREDIR=/lustre/beagle/`whoami`/MyTest/${PBS_JOBID}
echo $LUSTREDIR
mkdir -p $LUSTREDIR

# Copy the input file and executable to /lustre/beagle; these have to be user and project specific
cp /home/lpesce/tests/openMPTest/src/hello_smp hello.in $LUSTREDIR

# Move to /lustre/beagle
cd $LUSTREDIR

# Run one PE (-n 1), with 24 threads (-d 24).
# Note the environment variable OMP_NUM_THREADS for OpenMP (24 is rarely optimal!);
# other multi-threading approaches might need to be handled differently.
OMP_NUM_THREADS=24 aprun -n 1 -d 24 ./hello_smp > hello.out4

Notes from the slide: I use bash as the shell; $PBS_O_WORKDIR is the directory from which the script was submitted; the files are staged to Lustre before aprun sends the computation to the compute nodes; -d 24 asks for 24 OpenMP threads per MPI process and -n 1 asks for only one MPI process.

Intro to Beagle – Recap of the queues available on Beagle
interactive – max walltime 4 hours; max 1 node (default 1); max 1 job in the queue; 8 reserved nodes
development – max walltime 30 min; max 3 nodes (default 1); max 2 jobs in the queue; 16 reserved nodes
scalability – max walltime 30 min; max 10 nodes (default 1); max 4 jobs in the queue
batch – max walltime 2 days; no limit on the number of nodes (default 1); max 744 jobs in the queue; no reserved nodes (N/A)
interactive is recommended as the first step in porting applications to Beagle: to test and debug code in real time, on one node; it provides dedicated resources to run continuous refinement sessions
development is recommended as the second step, after the code compiles and runs using the interactive queue on one node: to test parallelism on a small scale, up to 3 nodes; it provides dedicated resources to efficiently optimize and test parallelism
scalability is recommended as the third step, after parallelism was tested on a small scale: up to 10 nodes; it provides dedicated resources to efficiently test and refine scalability
batch is the default queue, to run all the rest
To know more:

Intro to Beagle – More about aprun
The number of processes, both for MPI and OpenMP, is determined at launch time by the aprun command (more or less)
The aprun application launcher handles stdin, stdout and stderr for the user’s application
To know more: type man aprun
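A few hedged examples of common aprun layouts (the executable names are hypothetical; Beagle has 24 cores per node):
aprun -n 48 ./mpi_app                            # 48 MPI PEs, filling two nodes
aprun -n 24 -N 24 ./mpi_app                      # 24 PEs packed onto a single node (-N is PEs per node)
OMP_NUM_THREADS=6 aprun -n 8 -d 6 ./hybrid_app   # hybrid: 8 MPI PEs x 6 threads each = 48 cores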

Intro to Beagle – Monitoring applications and queues
qsub – batch jobs are submitted using the qsub command
qdel – used to delete a job
qstat – shows the jobs the resource manager, Torque, knows about (i.e., all those submitted using qsub)
– qstat -a: show all jobs in submit order
– qstat -a -u username: show all jobs of a specific user in submit order
– qstat -f job_id: receive a detailed report on the job status
– qstat -n job_id: see what nodes a job is running on
– qstat -q: list the queues available on Beagle
showq – shows all jobs in priority order; showq tells which jobs Moab, the scheduler, is considering eligible to run or is running
showres – shows all the reservations currently in place or that have been scheduled
To know more:
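A minimal sketch of a typical submit-and-monitor session (the script name and job id are hypothetical):
qsub my_script.pbs        # returns a job id such as 12345.sdb
qstat -a -u $(whoami)     # check all of your jobs, in submit order
showq | less              # see where the job sits in Moab’s priority order
qdel 12345                # delete the job if something went wrong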

Intro to Beagle – Acknowledgments
BSD, for funding most of the operational costs of Beagle
Many of the images and much of the content were taken or learned from Cray documentation or Cray staff
Globus, for providing us with many slides and support; special thanks to Mary Bass, manager for communications and outreach at the CI
NERSC and its personnel, who provided us with both material and direct instruction; special thanks to Katie Antypas, group leader of the User Services Group at NERSC
All the people at the CI who supported our work, from administering the facilities to taking pictures of Beagle

Thanks! We look forward to working with you. Questions? (or ask us later)