1
Using Kure and Topsail
Mark Reed, Grant Murphy, Charles Davis
ITS Research Computing
2
2 Outline
  Compute Clusters: Topsail and Kure
  Logging In
  File Spaces
  User Environment and Applications, Compiling
  Job Management
3
3 Logistics
  Course Format
  Lab Exercises
  Breaks
  UNC Research Computing: http://its.unc.edu/research
  Getting Started with Topsail: http://help.unc.edu/6214
  Getting Started with Kure: http://help.unc.edu/ccm3_015682
4
What is a compute cluster? What exactly is Topsail? Kure?
5
5 What is a compute cluster?
Some typical components:
  Compute Nodes
  Interconnect
  Shared File System
  Software
  Operating System (OS)
  Job Scheduler/Manager
  Mass Storage
6
6 Compute Cluster Advantages
  fast interconnect, tightly coupled
  aggregated compute resources
  large (scratch) file spaces
  installed software base
  scheduling and job management
  high availability
  data backup
7
7 Initial Topsail Cluster
  Initially a 1,040-CPU Dell Linux cluster
  520 dual-socket, single-core nodes
  Infiniband interconnect
  Intended for capability research
  Housed in the ITS Franklin machine room
  Fast and efficient for large computational jobs
8
8 Topsail Upgrade 1
  Topsail upgraded to 4,160 CPUs
  Replaced blades with dual-socket, quad-core Intel Xeon 5345 (Clovertown) processors: quad-core, 8 CPUs/node
  Increased the number of processors but decreased individual processor speed (was 3.6 GHz, now 2.33 GHz)
  Decreased energy usage and the resources needed for cooling
  Summary: slower clock speed, better memory bandwidth, less heat, quadrupled core count
  Benchmarks tend to run at the same speed per core, so Topsail shows a net ~4X improvement
  Of course, this number is VERY application dependent
9
9 Topsail - Upgraded Blades
  52 chassis (the basis of node names), each holding 10 blades, for 520 blades total
  Node names: cmp-chassis#-blade#
  Old compute blades: Dell PowerEdge 1855
    2 single-core Intel Xeon EM64T 3.6 GHz processors
    800 MHz FSB
    2 MB L2 cache per socket
    Intel NetBurst microarchitecture
  New compute blades: Dell PowerEdge 1955
    2 quad-core Intel 2.33 GHz processors
    1333 MHz FSB
    4 MB L2 cache per socket
    Intel Core 2 microarchitecture
10
10 Topsail Upgrade 2
The most recent Topsail upgrade (Feb/Mar '09) refreshed much of the infrastructure:
  Improved IBRIX filesystem
  Replaced and improved Infiniband cabling
  Moved the cluster to the ITS Manning building for better cooling and UPS
11
11 Top 500 History
  Top 500 lists come out twice a year: the ISC conference in June and the SC conference in November
  Topsail debuted at 74 in June 2006
  Peaked at 25 in June 2007
  Still in the Top 500
12
12 Current Topsail Architecture
  Login node: 8 CPUs @ 2.3 GHz Intel EM64T, 12 GB memory
  Compute nodes: 4,160 CPUs @ 2.3 GHz Intel EM64T, 12 GB memory per node
  Shared disk: 39 TB IBRIX parallel file system
  Interconnect: Infiniband 4x SDR
  64-bit Linux operating system
13
13 Multi-Core Computing
Processor structure on Topsail:
  500+ nodes
  2 sockets/node
  1 processor/socket
  4 cores/processor (quad-core)
  8 cores/node
http://www.tomshardware.com/2006/12/06/quad-core-xeon-clovertown-rolls-into-dp-servers/page3.html
14
14 Multi-Core Computing
The trend in High Performance Computing is towards multi-core or many-core computing: more cores at slower clock speeds for less heat.
Dual- and quad-core processors are now common; soon 64+ core processors will be common, and these may be heterogeneous!
15
15 The Heat Problem Taken From: Jack Dongarra, UT
16
16 More Parallelism Taken From: Jack Dongarra, UT
17
17 Infiniband Connections
  Connections come in single (SDR), double (DDR), and quad (QDR) data rates; Topsail is SDR.
  Single data rate is 2.5 Gbit/s in each direction per link.
  Links can be aggregated: 1x, 4x, 12x; Topsail is 4x.
  Links use 8B/10B encoding (10 bits carry 8 bits of data), so the useful data rate is four-fifths the raw rate.
  Thus single, double, and quad data rates carry 2, 4, or 8 Gbit/s of data per link, respectively.
  The data rate for Topsail (4x SDR) is therefore 8 Gbit/s: 4 x 2.5 Gbit/s = 10 Gbit/s raw, and 10 x 8/10 = 8 Gbit/s of data.
18
18 Topsail Network Topology
19
19 Infiniband Benchmarks
  Point-to-point (PTP) intranode communication on Topsail for various MPI send types
  Peak bandwidth: 1288 MB/s
  Minimum latency (1-way): 3.6 µs
20
20 Infiniband Benchmarks
  Scaled aggregate bandwidth for MPI broadcast on Topsail
  Note the good scaling throughout the tested range (24 to 1536 cores)
21
21 Kure
  The newest, "latest and greatest" compute cluster in Research Computing
  Named after Kure Beach in North Carolina
  Pronounced like the name of the Nobel prize-winning physicist and chemist, Madame Curie
22
22 Kure Compute Cluster
  Heterogeneous research cluster
  Hewlett Packard blades
  79 compute nodes, mostly Xeon 5560 2.8 GHz (Nehalem microarchitecture)
    Dual socket, quad core
    48 GB memory
    Over 600 cores
    Some higher-memory nodes
  Infiniband 4x QDR
  Priority usage for patrons; buy-in is cheap
  Storage:
    Scratch space same as Emerald
    No AFS home
23
23 Kure cont.
The current configuration of Kure is mostly homogeneous, but it will become increasingly heterogeneous as patrons and others add to it.
Most nodes have 48 GB of memory, but there are currently four high-memory nodes:
  2 nodes with 128 GB of memory each
  2 nodes with 96 GB of memory each
24
24 Topsail/Kure Comparison
Topsail:
  homogeneous
  4000+ cores
  2.33 GHz cores, Intel Core microarchitecture
  12 GB memory/node
  IB 4x SDR interconnect
Kure:
  heterogeneous
  600+ cores
  2.8 GHz cores, Intel Nehalem microarchitecture
  48 GB memory/node
  IB 4x QDR interconnect
25
25 Login to Topsail/Kure
Use ssh to connect:
  ssh topsail.unc.edu
  ssh kure.unc.edu
For SSH Secure Shell with Windows, see http://shareware.unc.edu/software.html
For use with an X-Windows display:
  ssh -X topsail.unc.edu or ssh -X kure.unc.edu
  ssh -Y topsail.unc.edu or ssh -Y kure.unc.edu
Off-campus users (i.e. domains outside of unc.edu) must use a VPN connection.
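To confirm that X forwarding is working, you can launch a simple X client after logging in. A minimal sketch, assuming an X server is running on your desktop and that a standard client such as xclock is installed on the login node:
  ssh -X kure.unc.edu
  echo $DISPLAY    # should show a forwarded display such as localhost:10.0
  xclock &         # a small clock window should appear on your local screen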
26
File Spaces
27
27 Topsail File Space
Home directories: /ifs1/home/
  home directories over 15 GB are not backed up
Scratch space: /ifs1/scr/
  over 39 TB of scratch space
  run jobs with large output in this space (see the sketch below)
Mass storage: ~/ms
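A minimal sketch of running a large-output job from scratch space; the per-user subdirectory layout and the executable name are assumptions for illustration only:
  mkdir -p /ifs1/scr/$USER/myrun    # create a working directory in scratch space
  cd /ifs1/scr/$USER/myrun
  cp ~/myexe .                      # copy the executable (and input files) from home
  bsub -q week -o out.%J ./myexe    # submit from scratch so large output lands here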
28
28 Kure File Space
Home directories: /nas02/home/<a>/<b>/<onyen>
  where a = first letter of onyen, b = second letter of onyen
  hard limit of 15 GB
Scratch space (still evolving):
  /nas: to be upgraded to 15 TB
  /largefs: to be upgraded to 30 TB
  run jobs with large output in these spaces
Mass storage: ~/ms
29
29 Mass Storage
"To infinity … and beyond" - Buzz Lightyear
  Long-term archival storage, accessed via ~/ms
  Looks like an ordinary disk file system; data is actually stored on tape
  "Limitless" capacity; data is backed up
  For storage only, not a work directory (i.e. don't run jobs from here)
  If you have many small files, use tar or zip to create a single file for better performance (see the sketch below)
  Sign up for this service on onyen.unc.edu
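A minimal sketch of bundling many small files into one archive before copying it to mass storage; the directory and archive names are hypothetical:
  tar -czf results_2009.tar.gz results/    # bundle the results/ directory into one compressed file
  cp results_2009.tar.gz ~/ms/             # copy the single archive to mass storage
  # later, retrieve and unpack it:
  cp ~/ms/results_2009.tar.gz .
  tar -xzf results_2009.tar.gz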
30
User Environment and Applications, Compiling Code, Modules
31
31 Modules
  The user environment is managed by modules.
  Modules modify the user environment by modifying and adding environment variables such as PATH or LD_LIBRARY_PATH.
  Typically you set these once and leave them.
  Note there are two module settings: one for your current environment and one that takes effect on your next login (e.g. batch jobs running on compute nodes).
32
32 Common Module Commands
  module avail
  module avail apps
  module help
  module list
  module add <name>
  module rm <name>
Login versions:
  module initlist
  module initadd <name>
  module initrm <name>
For more on modules, see http://help.unc.edu/CCM3_006660
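A short example session; the module names shown (an Intel compiler module and an MPI module) are assumptions for illustration, so check module avail for the actual names on the cluster:
  module avail                  # list every module available on the system
  module add intel_suite        # hypothetical name: load the Intel compilers for this session
  module add mvapich            # hypothetical name: load the default MPI stack
  module list                   # confirm what is loaded now
  module initadd intel_suite    # also load it at your next login (and in batch jobs)
  module initlist               # confirm what will load at login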
33
33 Parallel Jobs with MPI
There are three implementations of the MPI standard installed:
  mvapich
  mvapich2 (currently only on Topsail)
  openmpi
Performance is similar for all three, and all three run on the IB fabric.
mvapich is the default; openmpi and mvapich2 have more of the MPI-2 features implemented.
34
34 Compiling MPI Programs
  Use the MPI wrappers to compile your program: mpicc, mpiCC, mpif90, mpif77.
  The wrappers find the appropriate include files and libraries and then invoke the actual compiler.
  For example, mpicc will invoke either gcc or icc depending upon which module you have loaded.
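A minimal sketch of compiling with the wrappers; the source and output file names are hypothetical:
  mpicc -O2 -o myParallelExe myprog.c      # C source; mpicc calls gcc or icc per the loaded module
  mpif90 -O2 -o myParallelExe myprog.f90   # Fortran 90 equivalent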
35
35 Compiling on Topsail/Kure
Serial programming:
  Intel Compiler Suite for Fortran 77, Fortran 90, C, and C++ (recommended by Research Computing): icc, icpc, ifort
  GNU: gcc, g++, gfortran
Parallel programming:
  MPI (see previous page)
  OpenMP compiler flag: -openmp for Intel, -fopenmp for GNU
  Must set OMP_NUM_THREADS in the submission script (see the sketch below)
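A minimal sketch of compiling an OpenMP program with the Intel compiler and setting OMP_NUM_THREADS in an LSF submission script; the file names and the 8-thread count are assumptions (8 matches one full node on Topsail).
Compile with OpenMP enabled:
  icc -openmp -O2 -o myOmpExe myomp.c
Contents of myomp.bsub (requests 8 slots on one node and sets the thread count before running):
  #!/bin/bash
  #BSUB -q week
  #BSUB -o out.%J
  #BSUB -n 8
  #BSUB -R "span[ptile=8]"
  export OMP_NUM_THREADS=8
  ./myOmpExe
Submit it with: bsub < myomp.bsub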
36
36 Debugging - Totalview
If you are debugging code, there is a powerful commercial debugger, TotalView.
See http://help.unc.edu/CCM3_021717
  parallel and serial code
  Fortran/C/C++
  GUI for source-level control
  too many features to list!
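A minimal sketch of launching TotalView on a serial executable from an X-forwarded session; the module name and file names are assumptions, and a debug build normally wants -g:
  icc -g -O0 -o myexe myprog.c
  module add totalview       # hypothetical module name; check module avail
  totalview ./myexe &        # opens the TotalView GUI (requires ssh -X or -Y)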
37
Job Scheduling and Management
38
38 What does a Job Scheduler and batch system do?
Manage resources:
  allocate user tasks to resources
  monitor tasks
  process control
  manage input and output
  report status, availability, etc.
  enforce usage policies
39
39 Job Scheduling Systems
Allocates compute nodes to job submissions based on user priority, requested resources, execution time, etc.
Many types of schedulers:
  Load Sharing Facility (LSF), used by Topsail/Kure
  IBM LoadLeveler
  Portable Batch System (PBS)
  Sun Grid Engine (SGE)
40
40 LSF
  All Research Computing clusters use LSF to do job scheduling and management.
  LSF (Load Sharing Facility) is a (licensed) product from Platform Computing.
  It fairly distributes compute nodes among users and enforces usage policies for the established queues.
  Most common queues: int, now, week, month
  RC uses Fair Share scheduling, not first come, first served (FCFS).
  LSF commands typically start with the letter b (as in batch), e.g. bsub, bqueues, bjobs, bhosts, ...
  See the man pages for much more info!
41
41 Simplified View of LSF
bsub -n 64 -a mvapich -q week mpirun myjob
[Diagram: a user logged in to the login node submits a job; the job is routed to a queue of waiting jobs (job_J, job_F, myjob, job_7); the job is then dispatched to run on an available host that satisfies its requirements.]
42
42 Running Programs on Topsail
  Upon ssh to Topsail/Kure, you are on the login node.
  Programs SHOULD NOT be run on the login node.
  Submit programs to one of the many, many compute nodes.
  Submit jobs using the Load Sharing Facility (LSF) via the bsub command.
43
43 Common Batch Commands
  bsub: submit jobs
  bqueues: view info on defined queues, e.g. bqueues -l week
  bkill: stop/cancel a submitted job
  bjobs: view submitted jobs, e.g. bjobs -u all
  bhist: job history, e.g. bhist -l
44
44 Common Batch Commands
  bhosts: status and resources of hosts (nodes)
  bpeek: display output of a running job
  Use the man pages to get much more info, e.g. man bjobs
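A short example session tying these commands together; the job ID (12345) is hypothetical and is simply the number LSF prints when the job is submitted:
  bsub -q week -o out.%J ./myexe   # LSF replies: Job <12345> is submitted to queue <week>
  bjobs                            # check the status of your jobs (PEND, RUN, ...)
  bpeek 12345                      # peek at the output of the running job
  bhist -l 12345                   # detailed history of the job
  bkill 12345                      # cancel it if something went wrong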
45
45 Submitting Jobs: the bsub Command
Run large jobs out of scratch space; smaller jobs can run out of your home space.
  bsub [-bsub_opts] executable [-exec_opts]
Common bsub options:
  -o : output file, e.g. -o out.%J
  -q : queue, e.g. -q week
  -R "resource specification", e.g. -R "span[ptile=8]"
  -n : number of slots, used for parallel, MPI jobs
  -a : e.g. -a mvapich (used for MPI jobs)
46
46 Two Methods to Submit Jobs
Example: submit the executable, myexe, to the week queue and redirect output to a file (the default is to mail the output).
Method 1: Command line
  bsub -q week -o out.%J myexe
Method 2: Create a file (details to follow) called, for example, myexe.bsub, and then submit that file. Note the redirect symbol, <
  bsub < myexe.bsub
47
47 Method 2 cont.
The file you submit contains all the bsub options you want, so for this example myexe.bsub will look like this:
  #BSUB -q week
  #BSUB -o out.%J
  myexe
This is actually a shell script, so the top line could be the usual #!/bin/csh, etc., and you can run any commands you would like (if this doesn't mean anything to you, then never mind :)
48
48 Parallel Job Example
Batch command-line method:
  bsub -q week -o out.%J -n 64 -a mvapich mpirun myParallelExe
Batch file method:
  bsub < myexe.bsub
where myexe.bsub looks like this:
  #BSUB -q week
  #BSUB -o out.%J
  #BSUB -a mvapich
  #BSUB -n 64
  mpirun myParallelExe
49
49 Some Topsail Queues
  Queue     Time Limit   Jobs/User   CPU/Job
  int       2 hrs        128         ---
  debug     2 hrs        64          ---
  day       24 hrs       512         4 - 128
  week      1 week       512         4 - 128
  512cpu    4 days       512         32 - 512
  128cpu    4 days       512         32 - 128
  32cpu     2 days       512         4 - 32
  chunk     4 days       512         Batch Jobs
For access to the 512cpu queue, scalability must be demonstrated.
50
50 Some Kure Queues
  Queue     Time Limit   Jobs/User
  int       10 hrs       2
  debug     5 minutes    32
  bigmem    1 week       8
  week      1 week       -
  patrons   none         -
Most users have a 32-job-slot limit unless they have been granted extra slots.
Queues are always subject to change and probably will change as Kure production ramps up; use the bqueues command to find the current status.
51
51 Common Error 1
If a job dies immediately, check the err.%J file.
  err.%J file has the error: Can't read MPIRUN_HOST
  Problem: MPI environment settings were not correctly applied on the compute node
  Solution: include mpirun in the bsub command
52
52 Common Error 2
The job dies immediately after submission and the err.%J file is blank.
  Problem: ssh passwords and keys were not correctly set up at initial login to Topsail
  Solution:
    cd ~/.ssh/
    mv id_rsa id_rsa-orig
    mv id_rsa.pub id_rsa.pub-orig
    Log out of Topsail
    Log in to Topsail again and accept all defaults
53
53 Interactive Jobs
To run long shell scripts on Topsail or Kure, use the int queue:
  bsub -q int -Ip /bin/bash
This bsub command provides a prompt on a compute node.
You can then run a program or shell script interactively from the compute node.
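A short sketch of what an interactive session might look like; the working directory and program name are hypothetical:
  bsub -q int -Ip /bin/bash    # wait for the interactive shell prompt on a compute node
  module list                  # your login modules are available here
  cd /ifs1/scr/$USER/myrun     # work from scratch space (path is an example)
  ./myexe                      # run the program interactively; output goes to the terminal
  exit                         # leave the compute node and release the job slot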
54
54 Specialty Scripts
There are specialty scripts provided on Kure for user convenience:
  Batch scripts: bmatlab, bsas, bstata
  X-window scripts: xmatlab, xsas, xstata
  Interactive scripts: imatlab, istata
55
55 MPI/OpenMP Training
Courses on MPI and OpenMP are taught throughout the year by Research Computing.
  http://learnit.unc.edu/workshops
  http://help.unc.edu/CCM3_008194
See the schedule for the next course.
56
56 Further Help with Topsail/Kure
More details can be found in the Getting Started help documents:
  http://help.unc.edu/?id=6214 (Topsail)
  http://help.unc.edu/ccm3_015682 (Kure)
  http://keel.isis.unc.edu/wordpress/ (on campus only)
For assistance with Topsail/Kure, please contact the ITS Research Computing group:
  Email: research@unc.edu
  Phone: 919-962-HELP
  Submit a help ticket at http://help.unc.edu
For immediate assistance, see the manual pages (man).